Unnamed: 0 (int64, 0-41k) | title (string, 4-274 chars) | category (string, 5-18 chars) | summary (string, 22-3.66k chars) | theme (string, 8 classes) |
---|---|---|---|---|
3,100 |
Nonparametric Volatility Density Estimation
|
math.ST
|
We consider two kinds of stochastic volatility models. Both kinds of models
contain a stationary volatility process, the density of which, at a fixed
instant in time, we aim to estimate.
We discuss discrete time models where for instance a log price process is
modeled as the product of a volatility process and i.i.d. noise. We also
consider samples of certain continuous time diffusion processes. The sampled
time instants will be equidistant with vanishing distance.
A Fourier type deconvolution kernel density estimator based on the logarithm
of the squared processes is proposed to estimate the volatility density.
Expansions of the bias and bounds on the variances are derived.
|
math
|
3,101 |
Asymptotic accuracy of the jackknife variance estimator for certain smooth statistics
|
math.ST
|
We show that the jackknife variance estimator $v_{jack}$ and the
infinitesimal jackknife variance estimator are asymptotically equivalent if the
functional of interest is a smooth function of the mean or a smooth trimmed
L-statistic. We calculate the asymptotic variance of $v_{jack}$ for these
functionals.
|
math
|
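The asymptotic equivalence claimed above is easy to see numerically for a smooth function of the mean. The sketch below is illustrative only (it is not the paper's construction): `g` is a smooth function applied to the sample mean, the delete-one jackknife is computed exactly, and the infinitesimal jackknife uses numerically approximated influence values.

```python
import numpy as np

def jackknife_var(x, g):
    """Delete-one jackknife variance estimate of g(mean(x))."""
    n = len(x)
    total = x.sum()
    loo = np.array([g((total - xi) / (n - 1)) for xi in x])  # leave-one-out values
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

def infinitesimal_jackknife_var(x, g, eps=1e-6):
    """Infinitesimal jackknife: sum of squared empirical influence values
    over n^2, the influence of x_i on g(mean) being g'(mean) * (x_i - mean)."""
    n = len(x)
    m = x.mean()
    dg = (g(m + eps) - g(m - eps)) / (2 * eps)   # numerical derivative of g
    return np.sum((dg * (x - m)) ** 2) / n ** 2

rng = np.random.default_rng(0)
x = rng.exponential(size=500)
g = np.exp                                       # a smooth function of the mean
print(jackknife_var(x, g), infinitesimal_jackknife_var(x, g))
```

For moderate n the two values already agree closely, which is the content of the asymptotic-equivalence result.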
3,102 |
Approximating distribution functions by iterated function systems
|
math.ST
|
In this paper an iterated function system on the space of distribution
functions is constructed. The inverse problem is introduced and studied via
convex optimization. Some applications of this method to the approximation of
distribution functions and to estimation theory are given.
|
math
|
3,103 |
Statistical analysis of stochastic resonance with ergodic diffusion noise
|
math.ST
|
A subthreshold signal is transmitted through a channel and may be detected
when some noise -- with known structure and proportional to some level -- is
added to the data. There is an optimal noise level, called stochastic
resonance, that corresponds to the highest Fisher information in the problem of
estimation of the signal. As noise we consider an ergodic diffusion process and
the asymptotic is considered as time goes to infinity. We propose consistent
estimators of the subthreshold signal and we solve further a problem of
hypotheses testing. We also discuss evidence of stochastic resonance for both
estimation and hypotheses testing problems via examples.
|
math
|
3,104 |
Asymptotic normality of kernel type deconvolution estimators
|
math.ST
|
We derive asymptotic normality of kernel type deconvolution estimators of the
density, the distribution function at a fixed point, and of the probability of
an interval. We consider the so-called supersmooth case where the
characteristic function of the known distribution decreases exponentially.
It turns out that the limit behavior of the pointwise estimators of the
density and distribution function is relatively straightforward while the
asymptotics of the estimator of the probability of an interval depends in a
complicated way on the sequence of bandwidths.
|
math
|
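For readers unfamiliar with the kernel-type deconvolution estimators analyzed in the abstracts above, here is a minimal Fourier-inversion sketch. It assumes, purely for illustration, additive Gaussian noise of known standard deviation `sigma` and a kernel whose characteristic function is $(1-t^2)^3$ on $[-1,1]$; the papers above study general kernels and supersmooth error distributions, which this sketch does not reproduce.

```python
import numpy as np

def deconvolution_kde(x_grid, y, h, sigma):
    """Fourier-type deconvolution density estimate from y = x + eps,
    with eps ~ N(0, sigma^2) known.  Illustrative kernel: characteristic
    function phi_K(t) = (1 - t^2)^3 on [-1, 1], zero outside, so the
    inversion integral runs over a compact set."""
    t = np.linspace(-1.0 / h, 1.0 / h, 1001)
    dt = t[1] - t[0]
    phi_K = (1.0 - (t * h) ** 2) ** 3
    phi_emp = np.exp(1j * np.outer(t, y)).mean(axis=1)  # empirical c.f. of Y
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)           # c.f. of the noise
    integrand = np.exp(-1j * np.outer(x_grid, t)) * (phi_K * phi_emp / phi_eps)
    return np.real(integrand.sum(axis=1)) * dt / (2.0 * np.pi)

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=1000)          # latent sample
y = x + rng.normal(0.0, 0.5, size=x.size)    # contaminated observations
grid = np.linspace(-2.0, 6.0, 200)
fhat = deconvolution_kde(grid, y, h=0.4, sigma=0.5)  # estimate of the N(2,1) density
```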
3,105 |
Annuities under random rates of interest - revisited
|
math.ST
|
In this article we consider accumulated values of annuities-certain with
yearly payments with independent random interest rates. We focus on annuities
with payments varying in arithmetic and geometric progression which are
important basic varying annuities (see Kellison, 1991). They appear to be a
generalization of the types studied recently by Zaks (2001). We derive, via
recursive relationships, mean and variance formulae of the final values of the
annuities. As a consequence, we obtain moments related to the already discussed
cases, which leads to a correction of main results from Zaks (2001).
|
math
|
3,106 |
Estimation of Weibull Shape Parameter by Shrinkage Towards an Interval Under Failure Censored Sampling
|
math.ST
|
This paper proposes a class of shrinkage estimators for the shape
parameter beta in failure censored samples from the two-parameter Weibull
distribution when some 'a priori' or guessed interval containing the parameter
beta is available in addition to sample information, and analyses their
properties. Some estimators are generated from the proposed class and compared
with the minimum mean squared error (MMSE) estimator. Numerical computations in
terms of percent relative efficiency and absolute relative bias indicate that
certain of these estimators substantially improve upon the MMSE estimator in some
guessed interval of the parameter space of beta, especially for censored
samples with small sizes. Subsequently, a modified class of shrinkage
estimators is proposed with its properties.
|
math
|
3,107 |
Estimating a structural distribution function by grouping
|
math.ST
|
By the method of Poissonization we confirm some existing results concerning
consistent estimation of the structural distribution function in the situation
of a large number of rare events. Inconsistency of the so-called natural
estimator is proved. The method of grouping in cells of equal size is
investigated and its consistency established. A bound on the mean squared error
is also derived.
|
math
|
3,108 |
An Illuminating Counterexample
|
math.ST
|
We give a visually appealing counterexample to the proposition that unbiased
estimators are better than biased estimators.
|
math
|
3,109 |
Nonparametric volatility density estimation for discrete time models
|
math.ST
|
We consider discrete time models for asset prices with a stationary
volatility process. We aim at estimating the multivariate density of this
process at a set of consecutive time instants. A Fourier type deconvolution
kernel density estimator based on the logarithm of the squared process is
proposed to estimate the volatility density. Expansions of the bias and bounds
on the variance are derived.
|
math
|
3,110 |
On-line tracking of a smooth regression function
|
math.ST
|
We construct an on-line estimator with equidistant design for tracking a
smooth function from Stone-Ibragimov-Khasminskii class. This estimator has the
optimal convergence rate of risk to zero in sample size. The procedure for
setting coefficients of the estimator is controlled by a single parameter and
has a simple numerical solution. The off-line version of this estimator allows
one to eliminate a boundary layer. Simulation results are given.
|
math
|
3,111 |
Estimating the structural distribution function of cell probabilities
|
math.ST
|
We consider estimation of the structural distribution function of the cell
probabilities of a multinomial sample in situations where the number of cells
is large. We review the performance of the natural estimator, an estimator
based on grouping the cells and a kernel type estimator. Inconsistency of the
natural estimator and weak consistency of the other two estimators are derived
by Poissonization and other, new, technical devices.
|
math
|
3,112 |
Combining kernel estimators in the uniform deconvolution problem
|
math.ST
|
We construct a density estimator and an estimator of the distribution
function in the uniform deconvolution model. The estimators are based on
inversion formulas and kernel estimators of the density of the observations and
its derivative. Asymptotic normality and the asymptotic biases are derived.
|
math
|
3,113 |
Asymptotic Normality of Nonparametric Kernel Type Deconvolution Density Estimators: crossing the Cauchy boundary
|
math.ST
|
We derive asymptotic normality of kernel type deconvolution density
estimators. In particular we consider deconvolution problems where the known
component of the convolution has a symmetric lambda-stable distribution,
0 < lambda <= 2. It turns out that the limit behavior changes if the exponent
parameter lambda passes the value one, the case of Cauchy deconvolution.
|
math
|
3,114 |
Asymptotically efficient estimation of linear functionals in inverse regression models
|
math.ST
|
In this paper we will discuss a procedure to improve the usual estimator of a
linear functional of the unknown regression function in inverse nonparametric
regression models. In Klaassen, Lee, and Ruymgaart (2001) it has been proved
that this traditional estimator is not asymptotically efficient (in the sense
of the H\'{a}jek-Le Cam convolution theorem) except, possibly, when the error
distribution is normal. Since this estimator, however, is still root-n
consistent, a procedure in Bickel, Klaassen, Ritov, and Wellner (1993) applies
to construct a modification which is asymptotically efficient. A self-contained
proof of the asymptotic efficiency is included.
|
math
|
3,115 |
Emerging applications of geometric multiscale analysis
|
math.ST
|
Classical multiscale analysis based on wavelets has a number of successful
applications, e.g. in data compression, fast algorithms, and noise removal.
Wavelets, however, are adapted to point singularities, and many phenomena in
several variables exhibit intermediate-dimensional singularities, such as
edges, filaments, and sheets. This suggests that in higher dimensions, wavelets
ought to be replaced in certain applications by multiscale analysis adapted to
intermediate-dimensional singularities.
My lecture described various initial attempts in this direction. In
particular, I discussed two approaches to geometric multiscale analysis
originally arising in the work of harmonic analysts Hart Smith and Peter Jones
(and others): (a) a directional wavelet transform based on parabolic dilations;
and (b) analysis via anisotropic strips. Perhaps surprisingly, these tools have
potential applications in data compression, inverse problems, noise removal,
and signal detection; applied mathematicians, statisticians, and engineers are
eagerly pursuing these leads.
|
math
|
3,116 |
Hidden Markov and state space models: asymptotic analysis of exact and approximate methods for prediction, filtering, smoothing and statistical inference
|
math.ST
|
State space models have long played an important role in signal processing.
The Gaussian case can be treated algorithmically using the famous Kalman
filter. Similarly since the 1970s there has been extensive application of
Hidden Markov models in speech recognition with prediction being the most
important goal. The basic theoretical work here, in the case of $X$ and $Y$
finite (small), providing both algorithms and asymptotic analysis for inference,
is that of Baum and colleagues. During the last 30-40 years these general models
have
proved of great value in applications ranging from genomics to finance.
Unless $X$ and $Y$ are jointly Gaussian or $X$ is finite and small, the problem
of calculating the distributions discussed and the likelihood exactly is
numerically intractable, and if $Y$ is not finite, asymptotic analysis becomes
much more difficult. Some new developments have been the construction of
so-called ``particle filter'' (Monte Carlo type) methods for approximate
calculation of these distributions (see, for instance, Doucet et al. [4]) and
general asymptotic methods for the analysis of statistical methods in HMMs [2]
and by other authors.
We will discuss these methods and results in the light of exponential mixing
properties of the conditional (posterior) distribution of $(X_1,X_2,...)$ given
$(Y_1,Y_2,...)$ already noted by Baum and Petrie and recent work of the authors
Bickel, Ritov and Ryden, Del Moral and Jacod, Douc and Matias.
|
math
|
3,117 |
Statistical equivalence and stochastic process limit theorems
|
math.ST
|
A classical limit theorem of stochastic process theory concerns the sample
cumulative distribution function (CDF) from independent random variables. If
the variables are uniformly distributed then these centered CDFs converge in a
suitable sense to the sample paths of a Brownian Bridge. The so-called
Hungarian construction of Komlos, Major and Tusnady provides a strong form of
this result. In this construction the CDFs and the Brownian Bridge sample paths
are coupled through an appropriate representation of each on the same
measurable space, and the convergence is uniform at a suitable rate.
Within the last decade several asymptotic statistical-equivalence theorems
for nonparametric problems have been proven, beginning with Brown and Low
(1996) and Nussbaum (1996). The approach here to statistical-equivalence is
firmly rooted within the asymptotic statistical theory created by L. Le Cam but
in some respects goes beyond earlier results.
This talk demonstrates the analogy between these results and those from the
coupling method for proving stochastic process limit theorems. These two
classes of theorems possess a strong inter-relationship, and technical methods
from each domain can profitably be employed in the other. Results in a recent
paper by Carter, Low, Zhang and myself will be described from this perspective.
|
math
|
3,118 |
Asymptotic equivalence of the jackknife and infinitesimal jackknife variance estimators for some smooth statistics
|
math.ST
|
The jackknife variance estimator and the infinitesimal jackknife variance
estimator are shown to be asymptotically equivalent if the functional of
interest is a smooth function of the mean or a trimmed L-statistic with Hoelder
continuous weight function.
|
math
|
3,119 |
Selection Criterion for Log-Linear Models Using Statistical Learning Theory
|
math.ST
|
Log-linear models are a well-established method for describing statistical
dependencies among a set of n random variables. The observed frequencies of the
n-tuples are explained by a joint probability such that its logarithm is a sum
of functions, where each function depends on as few variables as possible. We
obtain for this class a new model selection criterion using nonasymptotic
concepts of statistical learning theory. We calculate the VC dimension for the
class of k-factor log-linear models. In this way we are not only able to select
the model with the appropriate complexity, but also obtain statements on the
reliability of the estimated probability distribution. Furthermore we show that
the selection of the best model among a set of models with the same complexity
can be written as a convex optimization problem.
|
math
|
3,120 |
Efficient estimation in the accelerated failure time model under cross sectional sampling
|
math.ST
|
Consider estimation of the regression parameter in the accelerated failure
time model when data are obtained by cross-sectional sampling. It is shown
that, under regularity conditions on the model, it is possible to construct an efficient
estimator of the unknown Euclidean regression parameter if the distribution of
the covariate vector is known and also if it is unknown with vanishing mean.
|
math
|
3,121 |
Parametric Estimation of Diffusion Processes Sampled at First Exit Times
|
math.ST
|
This paper introduces a family of recursively defined estimators of the
parameters of a diffusion process. We use ideas of stochastic algorithms for
the construction of the estimators. Asymptotic consistency of these estimators
and asymptotic normality of an appropriate normalization are proved. The
results are applied to two examples from the financial literature, viz. the
Cox-Ingersoll-Ross model and the constant elasticity of variance (CEV) process,
which illustrate the use of the technique proposed herein.
|
math
|
3,122 |
Rates of convergence for constrained deconvolution problem
|
math.ST
|
Let $X$ and $Y$ be two independent identically distributed random variables
with density $p(x)$ and $Z=\alpha X+\beta Y$ for some constants $\alpha>0$ and
$\beta>0$. We consider the problem of estimating $p(x)$ by means of the samples
from the distribution of $Z$. A nonparametric estimator based on the sinc kernel
is constructed and the asymptotic behaviour of the corresponding mean integrated
squared error is investigated.
|
math
|
3,123 |
On the largest eigenvalue of Wishart matrices with identity covariance when n, p and p/n tend to infinity
|
math.ST
|
Let X be an n*p matrix and l_1 the largest eigenvalue of the covariance matrix
X^{*}*X. The "null case" where X_{i,j} are independent Normal(0,1) is of
particular interest for principal component analysis. For this model, when n, p
tend to infinity and n/p tends to gamma in (0,\infty), it was shown in
Johnstone (2001) that l_1, properly centered and scaled, converges to the
Tracy-Widom law. We show that with the same centering and scaling, the result
is true even when p/n or n/p tends to infinity. The derivation uses ideas and
techniques quite similar to the ones presented in Johnstone (2001). Following
Soshnikov (2002), we also show that the same is true for the joint distribution
of the k largest eigenvalues, where k is a fixed integer. Numerical experiments
illustrate the fact that the Tracy-Widom approximation is reasonable even when
one of the dimensions is "small".
|
math
|
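A quick way to see the Tracy-Widom approximation at work is to simulate white Wishart matrices and apply Johnstone's centering and scaling, mu_np = (sqrt(n-1) + sqrt(p))^2 and sigma_np = (sqrt(n-1) + sqrt(p)) * (1/sqrt(n-1) + 1/sqrt(p))^(1/3). The sketch below is illustrative; the choice n=2000, p=20 mimics the "p/n far from constant" regime discussed in the abstract.

```python
import numpy as np

def standardized_l1(n, p, reps, rng):
    """Largest eigenvalue of X'X for X with iid N(0,1) entries,
    centered and scaled as in Johnstone (2001)."""
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = ((np.sqrt(n - 1) + np.sqrt(p))
             * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3))
    l1 = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n, p))
        # largest eigenvalue of X'X = square of the largest singular value of X
        l1[r] = np.linalg.svd(X, compute_uv=False)[0] ** 2
    return (l1 - mu) / sigma

rng = np.random.default_rng(2)
z = standardized_l1(n=2000, p=20, reps=200, rng=rng)   # p/n small
# Tracy-Widom (beta = 1) has mean about -1.21 and sd about 1.27
print(z.mean(), z.std())
```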
3,124 |
The marginalization paradox does not imply inconsistency for improper priors
|
math.ST
|
The marginalization paradox involves a disagreement between two Bayesians who
use two different procedures for calculating a posterior in the presence of an
improper prior. We show that the argument used to justify the procedure of one
of the Bayesians is inapplicable. There is therefore no reason to expect
agreement, no paradox, and no evidence that improper priors are inherently
inconsistent. We show further that the procedure in question can be interpreted
as the cancellation of infinities in the formal posterior. We suggest that the
implicit use of this formal procedure is the source of the observed
disagreement.
|
math
|
3,125 |
Nonparametric Estimation in the Model of Moving Average
|
math.ST
|
The subject of robust estimation in time series is widely discussed in the
literature. One of the approaches is to use GM-estimation. This method
incorporates a broad class of nonparametric estimators which, under suitable
conditions, includes estimators robust to outliers in the data. For linear
models the sensitivity of GM-estimators to outliers has been studied in the
work of Martin and Yohai [5], and influence functionals for this estimator were
derived. In this paper we follow this direction and examine the asymptotic
properties of the class of M-estimators, which is narrower than the class of
GM-estimators but gives more insight into the asymptotic properties of such
estimators. This paper gives an asymptotic expansion of the residual weighted
empirical process, which allows one to prove asymptotic normality of these
estimators in the case of non-smooth objective functions. For simplicity the
MA(1) model is considered, but it will be shown that even in this case the
mathematical techniques used to derive these asymptotic properties appear to be
rather complicated. However, the approach used in this paper could be applied
to GM-estimators and to more realistic models.
|
math
|
3,126 |
Grade of Membership Analysis: One Possible Approach to Foundations
|
math.ST
|
Grade of membership (GoM) analysis was introduced in 1974 as a means of
analyzing multivariate categorical data. Since then, it has been successfully
applied to many problems. The primary goal of GoM analysis is to derive
properties of individuals based on results of multivariate measurements; such
properties are given in the form of the expectations of a hidden random
variable (state of an individual) conditional on the result of observations.
In this article, we present a new perspective for the GoM model, based on
considering distribution laws of observed random variables as realizations of
another random variable. It happens that some moments of this new random
variable are directly estimable from observations. Our approach allows us to
establish a number of important relations between estimable moments and values
of interest, which, in turn, provides a basis for a new numerical procedure.
|
math
|
3,127 |
The support reduction algorithm for computing nonparametric function estimates in mixture models
|
math.ST
|
Vertex direction (VD) algorithms have been around for a few decades in the
experimental design and mixture models literature. We briefly review this type
of algorithm and describe a new member of the family: the support reduction
algorithm. The support reduction algorithm is applied to the problem of
computing nonparametric estimates in two inverse problems: convex density
estimation and the Gaussian deconvolution problem. Usually, VD algorithms solve
a finite dimensional (version of the) optimization problem of interest. We
introduce a method to solve the true infinite dimensional optimization problem.
|
math
|
3,128 |
Multiscale likelihood analysis and complexity penalized estimation
|
math.ST
|
We describe here a framework for a certain class of multiscale likelihood
factorizations wherein, in analogy to a wavelet decomposition of an L^2
function, a given likelihood function has an alternative representation as a
product of conditional densities reflecting information in both the data and
the parameter vector localized in position and scale. The framework is
developed as a set of sufficient conditions for the existence of such
factorizations, formulated in analogy to those underlying a standard
multiresolution analysis for wavelets, and hence can be viewed as a
multiresolution analysis for likelihoods. We then consider the use of these
factorizations in the task of nonparametric, complexity penalized likelihood
estimation. We study the risk properties of certain thresholding and
partitioning estimators, and demonstrate their adaptivity and near-optimality,
in a minimax sense over a broad range of function spaces, based on squared
Hellinger distance as a loss function. In particular, our results provide an
illustration of how properties of classical wavelet-based estimators can be
obtained in a single, unified framework that includes models for continuous,
count and categorical data types.
|
math
|
3,129 |
Confidence balls in Gaussian regression
|
math.ST
|
Starting from the observation of an R^n-Gaussian vector of mean f and
covariance matrix \sigma^2 I_n (I_n is the identity matrix), we propose a
method for building a Euclidean confidence ball around f, with prescribed
probability of coverage. For each n, we describe its nonasymptotic property and
show its optimality with respect to some criteria.
|
math
|
3,130 |
Minimax estimation of linear functionals over nonconvex parameter spaces
|
math.ST
|
The minimax theory for estimating linear functionals is extended to the case
of a finite union of convex parameter spaces. Upper and lower bounds for the
minimax risk can still be described in terms of a modulus of continuity.
However, in contrast to the theory for convex parameter spaces, rate optimal
procedures are often required to be nonlinear. A construction of such nonlinear
procedures is given. The results developed in this paper have important
applications to the theory of adaptation.
|
math
|
3,131 |
Statistical inference for time-inhomogeneous volatility models
|
math.ST
|
This paper offers a new approach for estimating and forecasting the
volatility of financial time series. No assumption is made about the parametric
form of the processes. On the contrary, we only suppose that the volatility can
be approximated by a constant over some interval. In such a framework, the main
problem consists of filtering this interval of time homogeneity; then the
estimate of the volatility can be simply obtained by local averaging.
We construct a locally adaptive volatility estimate (LAVE) which can perform
this task and investigate it both from the theoretical point of view and
through Monte Carlo simulations. Finally, the LAVE procedure is applied to a
data set of nine exchange rates and a comparison with a standard GARCH model is
also provided. Both models appear to be capable of explaining many of the
features of the data; nevertheless, the new approach seems to be superior to
the GARCH method as far as the out-of-sample results are concerned.
|
math
|
3,132 |
Estimating invariant laws of linear processes by U-statistics
|
math.ST
|
Suppose we observe an invertible linear process with independent mean-zero
innovations and with coefficients depending on a finite-dimensional parameter,
and we want to estimate the expectation of some function under the stationary
distribution of the process. The usual estimator would be the empirical
estimator. It can be improved using the fact that the innovations are centered.
We construct an even better estimator using the representation of the
observations as infinite-order moving averages of the innovations. Then the
expectation of the function under the stationary distribution can be written as
the expectation under the distribution of an infinite series in terms of the
innovations, and it can be estimated by a U-statistic of increasing order
(also called an ``infinite-order U-statistic'') in terms of the estimated
innovations. The estimator can be further improved using the fact that the
innovations are centered. This improved estimator is optimal if the
coefficients of the linear process are estimated optimally.
|
math
|
3,133 |
The efficiency of the estimators of the parameters in GARCH processes
|
math.ST
|
We propose a class of estimators for the parameters of a GARCH(p,q) sequence.
We show that our estimators are consistent and asymptotically normal under
mild conditions. The quasi-maximum likelihood and the likelihood estimators are
discussed in detail. We show that the maximum likelihood estimator is optimal.
If the tail of the distribution of the innovations is polynomial, even a
quasi-maximum likelihood estimator based on the exponential density performs
better than the standard normal density-based quasi-likelihood estimator of Lee
and Hansen and of Lumsdaine.
|
math
|
3,134 |
Selecting optimal multistep predictors for autoregressive processes of unknown order
|
math.ST
|
We consider the problem of choosing the optimal (in the sense of mean-squared
prediction error) multistep predictor for an autoregressive (AR) process of
finite but unknown order. If a working AR model (which is possibly
misspecified) is adopted for multistep predictions, then two competing types of
multistep predictors (i.e., plug-in and direct predictors) can be obtained from
this model. We provide some interesting examples to show that when both plug-in
and direct predictors are considered, the optimal multistep prediction results
cannot be guaranteed by correctly identifying the underlying model's order.
This finding challenges the traditional model (order) selection criteria, which
usually aim to choose the order of the true model. A new prediction selection
criterion, which attempts to seek the best combination of the prediction order
and the prediction method, is proposed to rectify this difficulty. When the
underlying model is stationary, the validity of the proposed criterion is
justified theoretically.
|
math
|
3,135 |
Missing at random, likelihood ignorability and model completeness
|
math.ST
|
This paper provides further insight into the key concept of missing at random
(MAR) in incomplete data analysis. Following the usual selection modelling
approach we envisage two models with separable parameters: a model for the
response of interest and a model for the missing data mechanism
(MDM). If the response model is given by a complete density family, then
frequentist inference from the likelihood function ignoring the MDM is valid if
and only if the MDM is MAR. This necessary and sufficient condition also holds
more generally for models for coarse data, such as censoring.
Examples are given to show the necessity of the completeness of the
underlying model for this equivalence to hold.
|
math
|
3,136 |
Information bounds for Cox regression models with missing data
|
math.ST
|
We derive information bounds for the regression parameters in Cox models when
data are missing at random. These calculations are of interest for
understanding the behavior of efficient estimation in case-cohort designs, a
type of two-phase design often used in cohort studies. The derivations make use
of key lemmas appearing in Robins, Rotnitzky and Zhao [J. Amer. Statist. Assoc.
89 (1994) 846-866] and Robins, Hsieh and Newey [J. Roy. Statist. Soc. Ser. B 57
(1995) 409-424], but in a form suited for our purposes here. We begin by
summarizing the results of Robins, Rotnitzky and Zhao in a form that leads
directly to the projection method which will be of use for our model of
interest. We then proceed to derive new information bounds for the regression
parameters of the Cox model with data Missing At Random (MAR). In the final
section we exemplify our calculations with several models of interest in cohort
studies, including an i.i.d. version of the classical case-cohort design of
Prentice [Biometrika 73 (1986) 1-11].
|
math
|
3,137 |
Finite sample properties of multiple imputation estimators
|
math.ST
|
Finite sample properties of multiple imputation estimators under the linear
regression model are studied. The exact bias of the multiple imputation
variance estimator is presented. A method of reducing the bias is proposed, and
simulation is used to make comparisons. We also show that the suggested method
can be used for a general class of linear estimators.
|
math
|
3,138 |
Sufficient burn-in for Gibbs samplers for a hierarchical random effects model
|
math.ST
|
We consider Gibbs and block Gibbs samplers for a Bayesian hierarchical
version of the one-way random effects model. Drift and minorization conditions
are established for the underlying Markov chains. The drift and minorization
are used in conjunction with results from J. S. Rosenthal [J. Amer. Statist.
Assoc. 90 (1995) 558-566] and G. O. Roberts and R. L. Tweedie [Stochastic
Process. Appl. 80 (1999) 211-229] to construct analytical upper bounds on the
distance to stationarity. These lead to upper bounds on the amount of burn-in
that is required to get the chain within a prespecified (total variation)
distance of the stationary distribution. The results are illustrated with a
numerical example.
|
math
|
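To make the setting concrete, below is a minimal Gibbs sampler for a conjugate Bayesian one-way random effects model, y_ij = theta_i + e_ij. The flat prior on mu and the IG(a, b) priors are illustrative choices, not necessarily the hierarchy analyzed in the paper, and no analytical burn-in bound is computed here; the sketch only shows the kind of sampler whose convergence such bounds control.

```python
import numpy as np

def gibbs_one_way(y, n_iter=5000, a=0.01, b=0.01, rng=None):
    """Gibbs sampler for y[i, j] = theta_i + e_ij with theta_i ~ N(mu, s2_t),
    e_ij ~ N(0, s2_e), a flat prior on mu and IG(a, b) priors on both
    variances, so every full conditional is available in closed form."""
    rng = rng or np.random.default_rng()
    K, m = y.shape
    theta = y.mean(axis=1).copy()
    mu, s2_e, s2_t = theta.mean(), y.var(), theta.var() + 1e-6
    draws = np.empty((n_iter, 3))
    for it in range(n_iter):
        # theta_i | rest: precision-weighted combination of cell mean and mu
        prec = m / s2_e + 1.0 / s2_t
        mean = (m * y.mean(axis=1) / s2_e + mu / s2_t) / prec
        theta = mean + rng.standard_normal(K) / np.sqrt(prec)
        # mu | rest (flat prior)
        mu = rng.normal(theta.mean(), np.sqrt(s2_t / K))
        # variances | rest: inverse-gamma draws via reciprocal gamma variates
        s2_e = 1.0 / rng.gamma(a + K * m / 2.0,
                               1.0 / (b + 0.5 * np.sum((y - theta[:, None]) ** 2)))
        s2_t = 1.0 / rng.gamma(a + K / 2.0,
                               1.0 / (b + 0.5 * np.sum((theta - mu) ** 2)))
        draws[it] = mu, s2_e, s2_t
    return draws

rng = np.random.default_rng(7)
theta_true = 1.0 + rng.standard_normal((10, 1))
y = theta_true + 0.5 * rng.standard_normal((10, 8))
chain = gibbs_one_way(y, rng=rng)
print(chain[1000:].mean(axis=0))  # posterior means of (mu, s2_e, s2_t) after burn-in
```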
3,139 |
Mean squared error of empirical predictor
|
math.ST
|
The term ``empirical predictor'' refers to a two-stage predictor of a linear
combination of fixed and random effects. In the first stage, a predictor is
obtained but it involves unknown parameters; thus, in the second stage, the
unknown parameters are replaced by their estimators. In this paper, we consider
mean squared errors (MSE) of empirical predictors under a general setup, where
ML or REML estimators are used for the second stage. We obtain second-order
approximation to the MSE as well as an estimator of the MSE correct to the same
order. The general results are applied to mixed linear models to obtain a
second-order approximation to the MSE of the empirical best linear unbiased
predictor (EBLUP) of a linear mixed effect and an estimator of the MSE of EBLUP
whose bias is correct to second order. The general mixed linear model includes
the mixed ANOVA model and the longitudinal model as special cases.
|
math
|
3,140 |
Least Angle Regression
|
math.ST
|
The purpose of model selection algorithms such as All Subsets, Forward
Selection and Backward Elimination is to choose a linear model on the basis
of the same set of data to which the model will be applied. Typically we have
available a large collection of possible covariates from which we hope to
select a parsimonious set for the efficient prediction of a response variable.
Least Angle Regression (LARS), a new model selection algorithm, is a useful and
less greedy version of traditional forward selection methods.
Three main properties are derived: (1) A simple modification of the LARS
algorithm implements the Lasso, an attractive version of ordinary least squares
that constrains the sum of the absolute regression coefficients; the LARS
modification calculates all possible Lasso estimates for a given problem, using
an order of magnitude less computer time than previous methods.
(2) A different LARS modification efficiently implements Forward Stagewise
linear regression, another promising new model selection method;
|
math
|
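scikit-learn implements the LARS algorithm and its Lasso modification described above; a minimal sketch on synthetic data (the dataset and parameter choices here are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

# Toy data: 50 candidate covariates, 10 of them truly informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=5.0, random_state=0)

# method='lar' runs plain Least Angle Regression; method='lasso' runs the
# LARS modification that traces the entire Lasso coefficient path.
alphas, active, coefs = lars_path(X, y, method='lasso')
print("active variables:", active[:10])
print("coefficient path shape (n_features, n_steps):", coefs.shape)
```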
3,141 |
Training samples in objective Bayesian model selection
|
math.ST
|
Central to several objective approaches to Bayesian model selection is the
use of training samples (subsets of the data), so as to allow utilization of
improper objective priors. The most common prescription for choosing training
samples is to choose them to be as small as possible, subject to yielding
proper posteriors; these are called minimal training samples.
When data can vary widely in terms of either information content or impact on
the improper priors, use of minimal training samples can be inadequate.
Important examples include certain cases of discrete data, the presence of
censored observations, and certain situations involving linear models and
explanatory variables. Such situations require more sophisticated methods of
choosing training samples. A variety of such methods are developed in this
paper, and successfully applied in challenging situations.
|
math
|
3,142 |
Local Whittle estimation in nonstationary and unit root cases
|
math.ST
|
Asymptotic properties of the local Whittle estimator in the nonstationary
case ($d > 1/2$) are explored. For $1/2 < d \leq 1$, the estimator is shown to be
consistent, and its limit distribution and the rate of convergence depend on
the value of $d$. For $d = 1$, the limit distribution is mixed normal.
For $d > 1$ and when the process has a polynomial trend of order $\alpha > 1/2$,
the estimator is shown to be inconsistent and to converge in probability to
unity.
|
math
|
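For concreteness, a minimal sketch of the local Whittle estimator in the form usually attributed to Robinson: minimize R(d) = log(m^{-1} sum_j lambda_j^{2d} I(lambda_j)) - 2d m^{-1} sum_j log(lambda_j) over the first m Fourier frequencies. The bandwidth rule m = n^{0.65} is an illustrative choice, and the code does not reproduce the paper's nonstationary-case analysis; it merely shows the statistic whose d > 1/2 behavior the paper studies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Local Whittle estimate of the memory parameter d: minimize
    R(d) = log(mean_j lambda_j^{2d} I_j) - 2d * mean_j log(lambda_j)
    over the first m Fourier frequencies lambda_j = 2*pi*j/n."""
    n = len(x)
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)  # periodogram
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2.0 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 1.49), method='bounded').x

rng = np.random.default_rng(9)
eps = rng.standard_normal(2048)
m = int(2048 ** 0.65)
print(local_whittle_d(eps, m))             # white noise: d near 0
print(local_whittle_d(np.cumsum(eps), m))  # random walk (unit root): d near 1
```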
3,143 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,144 |
Optimal predictive model selection
|
math.ST
|
Often the goal of model selection is to choose a model for future prediction,
and it is natural to measure the accuracy of a future prediction by squared
error loss. Under the Bayesian approach, it is commonly perceived that the
optimal predictive model is the model with highest posterior probability, but
this is not necessarily the case. In this paper we show that, for selection
among normal linear models, the optimal predictive model is often the median
probability model, which is defined as the model consisting of those variables
which have overall posterior probability greater than or equal to 1/2 of being
in a model. The median probability model often differs from the highest
probability model.
|
math
|
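A toy illustration of the median probability model: approximate posterior model probabilities for all subsets by BIC weights (a crude stand-in for a genuine Bayesian analysis, and an assumption of this sketch rather than the paper's prior), compute each variable's overall inclusion probability, and keep the variables with probability at least 1/2.

```python
import itertools
import numpy as np

def median_probability_model(X, y):
    """Enumerate all subsets, weight them by exp(-BIC/2) as crude
    posterior model probabilities, and return the set of variables with
    overall inclusion probability >= 1/2 (the median probability model)."""
    n, p = X.shape
    models, bics = [], []
    for k in range(p + 1):
        for S in itertools.combinations(range(p), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in S])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            bics.append(n * np.log(rss / n) + (k + 1) * np.log(n))
            models.append(set(S))
    w = np.exp(-(np.array(bics) - np.min(bics)) / 2.0)
    w /= w.sum()
    incl = np.array([w[[j in S for S in models]].sum() for j in range(p)])
    return {j for j in range(p) if incl[j] >= 0.5}, incl

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 6))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.standard_normal(100)
mpm, incl = median_probability_model(X, y)
print("median probability model:", mpm)         # expect {0, 1}
print("inclusion probabilities:", incl.round(2))
```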
3,145 |
Consistent covariate selection and post model selection inference in semiparametric regression
|
math.ST
|
This paper presents a model selection technique for estimation in
semiparametric regression models of the type
$Y_i=\beta^{\prime}\underline{X}_i+f(T_i)+W_i$, $i=1,...,n$. The parametric and
nonparametric components are estimated simultaneously by this procedure.
Estimation is based on a collection of finite-dimensional models, using a
penalized least squares criterion for selection. We show that by tailoring the
penalty terms developed for nonparametric regression to semiparametric models,
we can consistently estimate the subset of nonzero coefficients of the linear
part. Moreover, the selected estimator of the linear component is
asymptotically normal.
|
math
|
3,146 |
Nonconcave penalized likelihood with a diverging number of parameters
|
math.ST
|
A class of variable selection procedures for parametric models via nonconcave
penalized likelihood was proposed by Fan and Li to simultaneously estimate
parameters and select important variables. They demonstrated that this class of
procedures has an oracle property when the number of parameters is finite.
However, in most model selection problems the number of parameters should be
large and grow with the sample size. In this paper some asymptotic properties
of the nonconcave penalized likelihood are established for situations in which
the number of parameters tends to \infty as the sample size increases.
Under regularity conditions we have established an oracle property and the
asymptotic normality of the penalized likelihood estimators. Furthermore, the
consistency of the sandwich formula of the covariance matrix is demonstrated.
Nonconcave penalized likelihood ratio statistics are discussed, and their
asymptotic distributions under the null hypothesis are obtained by imposing
some mild conditions on the penalty functions.
|
math
|
3,147 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,148 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,149 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,150 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,151 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,152 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,153 |
Discussion of "Least angle regression" by Efron et al
|
math.ST
|
Discussion of ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,154 |
Rejoinder to "Least angle regression" by Efron et al
|
math.ST
|
Rejoinder to ``Least angle regression'' by Efron et al. [math.ST/0406456]
|
math
|
3,155 |
Martingale transforms goodness-of-fit tests in regression models
|
math.ST
|
This paper discusses two goodness-of-fit testing problems. The first problem
pertains to fitting an error distribution to an assumed nonlinear parametric
regression model, while the second pertains to fitting a parametric regression
model when the error distribution is unknown. For the first problem the paper
contains tests based on a certain martingale type transform of residual
empirical processes. The advantage of this transform is that the corresponding
tests are asymptotically distribution free. For the second problem the proposed
asymptotically distribution free tests are based on innovation martingale
transforms. A Monte Carlo study shows that the simulated level of the proposed
tests is close to the asymptotic level for moderate sample sizes.
|
math
|
3,156 |
A stochastic process approach to false discovery control
|
math.ST
|
This paper extends the theory of false discovery rates (FDR) pioneered by
Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300].
We develop a framework in which the False Discovery Proportion (FDP)--the
number of false rejections divided by the number of rejections--is treated as a
stochastic process. After obtaining the limiting distribution of the process,
we demonstrate the validity of a class of procedures for controlling the False
Discovery Rate (the expected FDP). We construct a confidence envelope for the
whole FDP process. From these envelopes we derive confidence thresholds, for
controlling the quantiles of the distribution of the FDP as well as controlling
the number of false discoveries. We also investigate methods for estimating the
p-value distribution.
|
math
|
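The baseline procedure extended by this paper is easy to state in code. The sketch below implements the Benjamini-Hochberg step-up rule and reports the realized false discovery proportion on a synthetic sparse-means example; the stochastic-process machinery and confidence envelopes of the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, q=0.1):
    """BH step-up rule: reject the k smallest p-values, where k is the
    largest index i with p_(i) <= q * i / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size:
        reject[order[:passed[-1] + 1]] = True
    return reject

rng = np.random.default_rng(3)
m, m1 = 1000, 50
z = rng.standard_normal(m)
z[:m1] += 3.0                              # the first m1 nulls are false
p = norm.sf(z)                             # one-sided p-values
rej = benjamini_hochberg(p, q=0.1)
fdp = rej[m1:].sum() / max(rej.sum(), 1)   # realized false discovery proportion
print(rej.sum(), "rejections, FDP =", round(float(fdp), 3))
```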
3,157 |
Testing predictor contributions in sufficient dimension reduction
|
math.ST
|
We develop tests of the hypothesis of no effect for selected predictors in
regression, without assuming a model for the conditional distribution of the
response given the predictors. Predictor effects need not be limited to the
mean function and smoothing is not required. The general approach is based on
sufficient dimension reduction, the idea being to replace the predictor vector
with a lower-dimensional version without loss of information on the regression.
Methodology using sliced inverse regression is developed in detail.
|
math
|
3,158 |
Density estimation for biased data
|
math.ST
|
The concept of biased data is well known and its practical applications range
from social sciences and biology to economics and quality control.
These observations arise when a sampling procedure chooses an observation
with probability that depends on the value of the observation. This is an
interesting sampling procedure because it favors some observations and neglects
others. It is known that biasing does not change rates of nonparametric density
estimation, but no results are available about sharp constants.
This article presents asymptotic results on sharp minimax density estimation.
In particular, a coefficient of difficulty is introduced that shows the
relationship between sample sizes of direct and biased samples that imply the
same accuracy of estimation. The notion of the restricted local minimax, where
a low-frequency part of the estimated density is known, is introduced; it sheds
new light on the phenomenon of nonparametric superefficiency.
Results of a numerical study are presented.
|
math
|
3,159 |
Semiparametric density estimation by local L_2-fitting
|
math.ST
|
This article examines density estimation by combining a parametric approach
with a nonparametric factor. The plug-in parametric estimator is seen as a
crude estimator of the true density and is adjusted by a nonparametric factor.
The nonparametric factor is derived by a criterion called local
L_2-fitting. A class of estimators that have multiplicative adjustment is
provided, including estimators proposed by several authors as special cases,
and the asymptotic theories are developed. Theoretical comparison reveals that
the estimators in this class are better than, or at least competitive with, the
traditional kernel estimator in a broad class of densities. The asymptotically
best estimator in this class can be obtained thanks to an elegant feature of the
bias function.
|
math
|
3,160 |
Empirical-likelihood-based confidence interval for the mean with a heavy-tailed distribution
|
math.ST
|
Empirical-likelihood-based confidence intervals for a mean were introduced by
Owen [Biometrika 75 (1988) 237-249], where at least a finite second moment is
required. This excludes some important distributions, for example, those in the
domain of attraction of a stable law with index between 1 and 2. In this
article we use a method similar to Qin and Wong [Scand.
J. Statist. 23 (1996) 209-219] to derive an empirical-likelihood-based
confidence interval for the mean when the underlying distribution has heavy
tails. Our method can easily be extended to obtain a confidence interval for
any order of moment of a heavy-tailed distribution.
|
math
|
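For reference, a sketch of the classical Owen-style empirical likelihood ratio for the mean, which is the construction the paper modifies for heavy tails. The chi-square calibration below requires a finite second moment, so it is exactly the regime the paper moves beyond; the grid search is an illustrative shortcut.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean at mu (Owen)."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                 # mu outside the convex hull of the data
    lo = -1.0 / d.max() + 1e-10       # keep 1 + lambda * d_i > 0 for all i
    hi = -1.0 / d.min() - 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=200)    # heavier tails than normal
cutoff = chi2.ppf(0.95, df=1)         # chi-square calibration (finite variance case)
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
inside = [mu for mu in grid if el_log_ratio(x, mu) <= cutoff]
print("95%% EL interval approx [%.3f, %.3f]" % (inside[0], inside[-1]))
```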
3,161 |
Bounds on coverage probabilities of the empirical likelihood ratio confidence regions
|
math.ST
|
This paper studies the least upper bounds on coverage probabilities of the
empirical likelihood ratio confidence regions based on estimating equations.
The implications of the bounds on empirical likelihood inference are also
discussed.
|
math
|
3,162 |
Estimation of fractal dimension for a class of Non-Gaussian stationary processes and fields
|
math.ST
|
We present the asymptotic distribution theory for a class of increment-based
estimators of the fractal dimension of a random field of the form g{X(t)},
where g:R\to R is an unknown smooth function and X(t) is a real-valued
stationary Gaussian field on R^d, d=1 or 2, whose covariance function obeys a
power law at the origin. The relevant theoretical framework here is ``fixed
domain'' (or ``infill'') asymptotics. Surprisingly, the limit theory in this
non-Gaussian case is somewhat richer than in the Gaussian case (the latter is
recovered when g is affine), in part because estimators of the type considered
may have an asymptotic variance which is random in the limit. Broadly, when g
is smooth and nonaffine, three types of limit distributions can arise, types
(i), (ii) and (iii), say. Each type can be represented as a random integral.
More specifically, type (i) can be represented as the integral of a certain
random function with respect to Lebesgue measure; type (ii) can be represented
as the integral of a second random function
|
math
|
3,163 |
The empirical process on Gaussian spherical harmonics
|
math.ST
|
We establish weak convergence of the empirical process on the spherical
harmonics of a Gaussian random field in the presence of an unknown angular
power spectrum. This result suggests various Gaussianity tests with an
asymptotic justification. The issue of testing for Gaussianity on isotropic
spherical random fields has recently received strong empirical attention in the
cosmological literature, in connection with the statistical analysis of cosmic
microwave background radiation.
|
math
|
3,164 |
Monomial ideals and the Scarf complex for coherent systems in reliability theory
|
math.ST
|
A certain type of integer grid, called here an echelon grid, is an object
found both in coherent systems whose components have a finite or countable
number of levels and in algebraic geometry. If \alpha=(\alpha_1,...,\alpha_d)
is an integer vector representing the state of a system, then the corresponding
algebraic object is a monomial x_1^{\alpha_1}... x_d^{\alpha_d} in the
indeterminates x_1,..., x_d. The idea is to relate a coherent system to
monomial ideals, so that the so-called Scarf complex of the monomial ideal
yields an inclusion-exclusion identity for the probability of failure, which
uses many fewer terms than the classical identity. Moreover in the ``general
position'' case we obtain via the Scarf complex the tube bounds given by Naiman
and Wynn [J. Inequal. Pure Appl. Math. (2001) 2 1-16].
Examples are given for the binary case but the full utility is for general
multistate coherent systems and a comprehensive example is given.
|
math
|
3,165 |
Optimal change-point estimation from indirect observations
|
math.ST
|
We study nonparametric change-point estimation from indirect noisy
observations. Focusing on the white noise convolution model, we consider two
classes of functions that are smooth apart from the change-point. We establish
lower bounds on the minimax risk in estimating the change-point and develop
rate optimal estimation procedures. The results demonstrate that the best
achievable rates of convergence are determined both by smoothness of the
function away from the change-point and by the degree of ill-posedness of the
convolution operator. Optimality is obtained by introducing a new technique
that involves, as a key element, detection of zero crossings of an estimate of
the properly smoothed second derivative of the underlying function.
|
math
|
3,166 |
Discussion on Benford's Law and its Application
|
math.ST
|
The probability that a number in many naturally occurring tables of numerical
data has first significant digit $d$ is predicted by Benford's Law, ${\rm Prob}(d) = \log_{10}(1 + \frac{1}{d})$, $d = 1, 2, \ldots, 9$.
Illustrations of Benford's Law from both theoretical and real-life sources in
both the sciences and the social sciences are shown in detail, with some novel
ideas and generalizations developed solely by the authors of this paper. Three
tests, the Chi-Square test, total variation distance, and maximum deviation, are adopted
to examine the fitness of the datasets to Benford's distribution. Finally,
applications of Benford's Law are summarized and explored to reveal the power
of this mathematical principle.
|
math
|
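The law and the three goodness-of-fit measures mentioned are simple to compute. The sketch below checks a classic Benford-like source (products of uniform random variables, an illustrative choice) against the Benford probabilities with a chi-square statistic, total variation distance, and maximum deviation.

```python
import numpy as np
from scipy.stats import chisquare

digits = np.arange(1, 10)
benford = np.log10(1.0 + 1.0 / digits)     # Prob(d) = log10(1 + 1/d)

def first_digits(values):
    """First significant digit of each positive value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

rng = np.random.default_rng(5)
data = np.prod(rng.uniform(1.0, 10.0, size=(100_000, 5)), axis=1)
freq = np.bincount(first_digits(data), minlength=10)[1:]

chi2_stat, pval = chisquare(freq, f_exp=benford * freq.sum())
tv_dist = 0.5 * np.abs(freq / freq.sum() - benford).sum()
max_dev = np.abs(freq / freq.sum() - benford).max()
print(chi2_stat, tv_dist, max_dev)
```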
3,167 |
Some improvements in numerical evaluation of symmetric stable density and its derivatives
|
math.ST
|
We propose improvements in numerical evaluation of symmetric stable density
and its partial derivatives with respect to the parameters. They are useful for
more reliable evaluation of maximum likelihood estimator and its standard
error. Numerical values of the Fisher information matrix of symmetric stable
distributions are also given. Our improvements consist of modification of the
method of Nolan (1997) for the boundary cases, i.e., in the tail and mode of
the densities and in the neighborhood of the Cauchy and the normal
distributions.
|
math
|
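As background, the standard symmetric alpha-stable density can be evaluated from the textbook inversion formula f(x) = (1/pi) * integral_0^inf exp(-t^alpha) cos(tx) dt. The sketch below uses plain adaptive quadrature; it is not the Nolan-type algorithm the paper refines, which handles the tail, the mode, and the near-Cauchy and near-normal boundary cases more carefully.

```python
import numpy as np
from scipy.integrate import quad

def sym_stable_pdf(x, alpha):
    """Standard symmetric alpha-stable density via the inversion formula
    f(x) = (1/pi) * integral_0^inf exp(-t^alpha) * cos(t*x) dt."""
    val, _ = quad(lambda t: np.exp(-t ** alpha) * np.cos(t * x),
                  0.0, np.inf, limit=200)
    return val / np.pi

# Sanity checks against the closed forms at the two boundary cases:
print(sym_stable_pdf(0.5, 1.0), 1.0 / (np.pi * (1.0 + 0.25)))               # Cauchy
print(sym_stable_pdf(0.5, 2.0), np.exp(-0.25 / 4.0) / np.sqrt(4 * np.pi))   # N(0, 2)
```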
3,168 |
Mimicking counterfactual outcomes to estimate causal effects
|
math.ST
|
In observational studies, treatment may be adapted to covariates at several
times without a fixed protocol, in continuous time. Treatment influences
covariates, which influence treatment, which influences covariates, and so on.
Then even time-dependent Cox-models cannot be used to estimate the net
treatment effect. Structural nested models have been applied in this setting.
Structural nested models are based on counterfactuals: the outcome a person
would have had had treatment been withheld after a certain time. Previous work
on continuous-time structural nested models assumes that counterfactuals depend
deterministically on observed data, while conjecturing that this assumption can
be relaxed. This article proves that one can mimic counterfactuals by
constructing random variables, solutions to a differential equation, that have
the same distribution as the counterfactuals, even given past observed data.
These "mimicking" variables can be used to estimate the parameters of
structural nested models without assuming the treatment effect to be
deterministic.
|
math
|
3,169 |
Estimating the causal effect of a time-varying treatment on time-to-event using structural nested failure time models
|
math.ST
|
In this paper we review an approach to estimating the causal effect of a
time-varying treatment on time to some event of interest. This approach is
designed for the situation where the treatment may have been repeatedly adapted
to patient characteristics, which themselves may also be time-dependent. In
this situation the effect of the treatment cannot simply be estimated by
conditioning on the patient characteristics, as these may themselves be
indicators of the treatment effect. This so-called time-dependent confounding
is typical in observational studies. We discuss a new class of failure time
models, structural nested failure time models, which can be used to estimate
the causal effect of a time-varying treatment, and present methods for
estimating and testing the parameters of these models.
|
math
|
3,170 |
Estimating marginal survival function by adjusting for dependent censoring using many covariates
|
math.ST
|
One goal in survival analysis of right-censored data is to estimate the
marginal survival function in the presence of dependent censoring. When many
auxiliary covariates are sufficient to explain the dependent censoring,
estimation based on either a semiparametric model or a nonparametric model of
the conditional survival function can be problematic due to the high
dimensionality of the auxiliary information. In this paper, we use two working
models to condense these high-dimensional covariates via dimension reduction;
then an estimate of the marginal survival function can be derived
nonparametrically in a low-dimensional space. We show that such an estimator
has the following double robust property: when either working model is correct,
the estimator is consistent and asymptotically Gaussian; when both working
models are correct, the asymptotic variance attains the efficiency bound.
|
math
|
3,171 |
Strong consistency of MLE for finite uniform mixtures when the scale parameters are exponentially small
|
math.ST
|
We consider maximum likelihood estimation of finite mixture of uniform
distributions. We prove that the maximum likelihood estimator is strongly
consistent if the scale parameters of the component uniform distributions are
restricted from below by exp(-n^d), 0 < d < 1, where n is the sample size.
|
math
|
3,172 |
Causal Inference for Complex Longitudinal Data: The Continuous Time g-Computation Formula
|
math.ST
|
I write out and discuss how one might try to prove the continuous time
g-computation formula, in the simplest possible case: treatments (labelled a,
for actions) and covariates (l: longitudinal data) form together a bivariate
counting process. This formula is an important missing ingredient in the
continuous time version of J.M. Robins' counterfactual-based theory of causal
inference for complex longitudinal data.
|
math
|
3,173 |
Sharp optimality for density deconvolution with dominating bias
|
math.ST
|
We consider estimation of the common probability density $f$ of i.i.d. random
variables $X_i$ that are observed with an additive i.i.d. noise. We assume that
the unknown density $f$ belongs to a class $\mathcal{A}$ of densities whose
characteristic function is described by the exponent $\exp(-\alpha |u|^r)$ as
$|u|\to \infty$, where $\alpha >0$, $r>0$. The noise density is supposed to be
known and such that its characteristic function decays as $\exp(-\beta |u|^s)$,
as $|u| \to \infty$, where $\beta >0$, $s>0$. Assuming that $r<s$, we suggest a
kernel type estimator that is optimal in sharp asymptotical minimax sense on
$\mathcal{A}$ simultaneously under the pointwise and the $\mathbb{L}_2$-risks.
The variance of the estimators turns out to be asymptotically negligible w.r.t.
its squared bias. For $r<s/2$ we construct a sharp adaptive estimator of $f$.
We discuss some effects of dominating bias, such as superefficiency of minimax
estimators.
|
math
|
3,174 |
Densities, spectral densities and modality
|
math.ST
|
This paper considers the problem of specifying a simple approximating density
function for a given data set (x_1,...,x_n). Simplicity is measured by the
number of modes but several different definitions of approximation are
introduced. The taut string method is used to control the numbers of modes and
to produce candidate approximating densities. Refinements are introduced that
improve the local adaptivity of the procedures and the method is extended to
spectral densities.
|
math
|
3,175 |
Higher criticism for detecting sparse heterogeneous mixtures
|
math.ST
|
Higher criticism, or second-level significance testing, is a
multiple-comparisons concept mentioned in passing by Tukey. It concerns a
situation where there are many independent tests of significance and one is
interested in rejecting the joint null hypothesis. Tukey suggested comparing
the fraction of observed significances at a given \alpha-level to the expected
fraction under the joint null. In fact, he suggested standardizing the
difference of the two quantities and forming a z-score; the resulting z-score
tests the significance of the body of significance tests. We consider a
generalization, where we maximize this z-score over a range of significance
levels 0<\alpha\leq\alpha_0.
We are able to show that the resulting higher criticism statistic is
effective at resolving a very subtle testing problem: testing whether n normal
means are all zero versus the alternative that a small fraction is nonzero. The
subtlety of this ``sparse normal means'' testing problem can be seen from work
of Ingster and Jin, who studied such problems in great detail. In their
studies, they identified an interesting range of cases where the small fraction
of nonzero means is so small that the alternative hypothesis exhibits little
noticeable effect on the distribution of the p-values either for the bulk of
the tests or for the few most highly significant tests.
In this range, when the amplitude of nonzero means is calibrated with the
fraction of nonzero means, the likelihood ratio test for a precisely specified
alternative would still succeed in separating the two hypotheses.
|
math
|
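In code, the higher criticism statistic is a short computation over sorted p-values. The sketch below follows the Donoho-Jin form, maximizing the standardized comparison of the empirical significance fraction with its null expectation over levels up to alpha_0 = 1/2; the sparse normal means example is illustrative.

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvals, alpha0=0.5):
    """Donoho-Jin higher criticism: maximal standardized z-score comparing
    the empirical fraction of significances i/n with its null expectation
    p_(i), over significance levels up to alpha0."""
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    return hc[i <= alpha0 * n].max()

rng = np.random.default_rng(6)
n, frac, amp = 10_000, 0.01, 2.5          # sparse normal means alternative
x = rng.standard_normal(n)
x[:int(frac * n)] += amp
p_alt = norm.sf(x)
p_null = norm.sf(rng.standard_normal(n))
print(higher_criticism(p_alt), higher_criticism(p_null))   # alternative >> null
```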
3,176 |
Breakdown points for maximum likelihood estimators of location-scale mixtures
|
math.ST
|
ML-estimation based on mixtures of Normal distributions is a widely used tool
for cluster analysis. However, a single outlier can make the parameter
estimation of at least one of the mixture components break down. Among others,
the estimation of mixtures of t-distributions by McLachlan and
Peel [Finite Mixture Models (2000) Wiley, New York] and the addition of a
further mixture component accounting for ``noise'' by Fraley and Raftery
[The Computer J. 41 (1998) 578-588] were suggested as more robust
alternatives.
In this paper, the definition of an adequate robustness measure for cluster
analysis is discussed and bounds for the breakdown points of the mentioned
methods are given. It turns out that the two alternatives, while adding
stability in the presence of outliers of moderate size, do not possess a
substantially better breakdown behavior than estimation based on Normal
mixtures. If the number of clusters s is treated as fixed, r additional points
suffice for all three methods to let the parameters of r clusters explode. Only
in the case of r=s is this not possible for t-mixtures. The ability to estimate
the number of mixture components, for example, by use of the Bayesian
information criterion of Schwarz [Ann. Statist. 6 (1978)
461-464], and to isolate gross outliers as clusters of one point, is crucial
for an improved breakdown behavior of all three techniques. Furthermore, a
mixture of Normals with an improper uniform distribution is proposed to achieve
more robustness in the case of a fixed number of components.
|
math
|
3,177 |
Asymptotic global robustness in Bayesian decision theory
|
math.ST
|
In Bayesian decision theory, it is known that robustness with respect to the
loss and the prior can be improved by adding new observations. In this article
we study the rate of robustness improvement with respect to the number of
observations n. Three usual measures of posterior global robustness are
considered: the (range of the) Bayes actions set derived from a class of loss
functions, the maximum regret of using a particular loss when the subjective
loss belongs to a given class and the range of the posterior expected loss when
the loss function ranges over a class. We show that the rate of convergence of
the first measure of robustness is \sqrt{n}, while it is n for the other measures
under reasonable assumptions on the class of loss functions. We begin with the
study of two particular cases to illustrate our results.
|
math
|
3,178 |
Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
|
math.ST
|
We describe and develop a close relationship between two problems that have
customarily been regarded as distinct: that of maximizing entropy, and that of
minimizing worst-case expected loss. Using a formulation grounded in the
equilibrium theory of zero-sum games between Decision Maker and
Nature, these two problems are shown to be dual to each other, the solution
to each providing that to the other. Although Topsøe described this connection
for the Shannon entropy over 20 years ago, it does not appear to be widely
known even in that important special case. We here generalize this theory to
apply to arbitrary decision problems and loss functions. We indicate how an
appropriate generalized definition of entropy can be associated with such a
problem, and we show that, subject to certain regularity conditions, the
above-mentioned duality continues to apply in this extended context.
This simultaneously provides a possible rationale for maximizing entropy and
a tool for finding robust Bayes acts. We also describe the essential identity
between the problem of maximizing entropy and that of minimizing a related
discrepancy or divergence between distributions. This leads to an extension, to
arbitrary discrepancies, of a well-known minimax theorem for the case of
Kullback-Leibler divergence (the ``redundancy-capacity theorem'' of information
theory). For the important case of families of distributions having certain
mean values specified, we develop simple sufficient conditions and methods for
identifying the desired solutions.
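As a concrete instance of the mean-value case, a small sketch (support, target mean and names are ad hoc) that recovers the Shannon-maxent distribution on a finite support with a specified mean by solving the one-dimensional dual problem; the solution has the exponential-family form p_i \propto exp(\lambda x_i):

    import numpy as np
    from scipy.optimize import brentq

    x = np.arange(11.0)      # finite support 0, 1, ..., 10
    target = 3.0             # specified mean value

    def mean_under(lam):
        z = lam * x
        w = np.exp(z - z.max())          # stabilized exponential weights
        return (w / w.sum()) @ x

    # mean_under is increasing in lam, so a scalar root-finder suffices
    lam = brentq(lambda l: mean_under(l) - target, -50.0, 50.0)
    z = lam * x
    p = np.exp(z - z.max()); p /= p.sum()
    print(p @ x)   # the maxent pmf reproduces the required mean 3.0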
|
math
|
3,179 |
Uniform asymptotics for robust location estimates when the scale is unknown
|
math.ST
|
Most asymptotic results for robust estimates rely on regularity conditions
that are difficult to verify in practice. Moreover, these results apply to
fixed distribution functions. In the robustness context the distribution of the
data remains largely unspecified and hence results that hold uniformly over a
set of possible distribution functions are of theoretical and practical
interest. Also, it is desirable to be able to determine the size of the set of
distribution functions where the uniform properties hold. In this paper we
study the problem of obtaining verifiable regularity conditions that suffice to
yield uniform consistency and uniform asymptotic normality for location robust
estimates when the scale of the errors is unknown.
We study M-location estimates calculated with an S-scale and we obtain
uniform asymptotic results over contamination neighborhoods. Moreover, we show
how to calculate the maximum size of the contamination neighborhoods where
these uniform results hold. There is a trade-off between the size of these
neighborhoods and the breakdown point of the scale estimate.
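For intuition, a sketch of an M-location estimate computed by iterative reweighting; for simplicity the auxiliary scale here is the normalized MAD rather than the S-scale studied in the paper, and the tuning constant is the usual Huber 95%-efficiency choice:

    import numpy as np

    def m_location(x, k=1.345, tol=1e-8, max_iter=100):
        x = np.asarray(x, dtype=float)
        mu = np.median(x)
        s = 1.4826 * np.median(np.abs(x - mu))   # MAD scale, not an S-scale
        if s == 0:
            return mu
        for _ in range(max_iter):
            r = np.abs(x - mu) / s
            w = np.minimum(1.0, k / np.maximum(r, 1e-12))  # Huber weights
            mu_new = np.sum(w * x) / np.sum(w)
            if abs(mu_new - mu) < tol * s:
                break
            mu = mu_new
        return mu_new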
|
math
|
3,180 |
Robust Inference for Univariate Proportional Hazards Frailty Regression Models
|
math.ST
|
We consider a class of semiparametric regression models which are
one-parameter extensions of the Cox [J. Roy. Statist. Soc. Ser. B 34 (1972)
187-220] model for right-censored univariate failure times. These models assume
that the hazard given the covariates and a random frailty unique to each
individual has the proportional hazards form multiplied by the frailty.
The frailty is assumed to have mean 1 within a known one-parameter family of
distributions. Inference is based on a nonparametric likelihood. The behavior
of the likelihood maximizer is studied under general conditions where the
fitted model may be misspecified. The joint estimator of the regression and
frailty parameters as well as the baseline hazard is shown to be uniformly
consistent for the pseudo-value maximizing the asymptotic limit of the
likelihood. Appropriately standardized, the estimator converges weakly to a
Gaussian process. When the model is correctly specified, the procedure is
semiparametric efficient, achieving the semiparametric information bound for
all parameter components. It is also proved that the bootstrap gives valid
inferences for all parameters, even under misspecification.
We demonstrate analytically the importance of robust inference in several
examples. In a randomized clinical trial, a valid test of the treatment effect
is possible when other prognostic factors and the frailty distribution are both
misspecified. Under certain conditions on the covariates, the ratios of the
regression parameters are still identifiable. The practical utility of the
procedure is illustrated on a non-Hodgkin's lymphoma dataset.
|
math
|
3,181 |
A Bernstein-von Mises theorem in the nonparametric right-censoring model
|
math.ST
|
In the recent Bayesian nonparametric literature, many examples have been
reported in which Bayesian estimators and posterior distributions do not
achieve the optimal convergence rate, indicating that the Bernstein-von
Mises theorem does not hold. In this article, we give a positive result in
this direction by showing that the Bernstein-von Mises theorem holds in
survival models for a large class of prior processes neutral to the right. We
also show that, for an arbitrarily given convergence rate n^{-\alpha} with
0<\alpha \leq 1/2, a prior process neutral to the right can be chosen so that
its posterior distribution achieves the convergence rate n^{-\alpha}.
|
math
|
3,182 |
Statistical estimation in the proportional hazards model with risk set sampling
|
math.ST
|
Thomas' partial likelihood estimator of regression parameters is widely used
in the analysis of nested case-control data with Cox's model. This paper
proposes a new estimator of the regression parameters, which is consistent and
asymptotically normal. Its asymptotic variance is smaller than that of Thomas'
estimator away from the null. Unlike some other existing estimators, the
proposed estimator does not rely on any more data than strictly necessary for
Thomas' estimator and is easily computable from a closed form estimating
equation with a unique solution. The variance estimation is obtained as minus
the inverse of the derivative of the estimating function and therefore the
inference is easily available. A numerical example is provided in support of
the theory.
|
math
|
3,183 |
Convergence rates for posterior distributions and adaptive estimation
|
math.ST
|
The goal of this paper is to provide theorems on convergence rates of
posterior distributions that can be applied to obtain good convergence rates in
the context of density estimation as well as regression. We show how to choose
priors so that the posterior distributions converge at the optimal rate without
prior knowledge of the degree of smoothness of the density function or the
regression function to be estimated.
|
math
|
3,184 |
Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences
|
math.ST
|
An empirical Bayes approach to the estimation of possibly sparse sequences
observed in Gaussian white noise is set out and investigated. The prior
considered is a mixture of an atom of probability at zero and a heavy-tailed
density \gamma, with the mixing weight chosen by marginal maximum likelihood,
in the hope of adapting between sparse and dense sequences. If estimation is
then carried out using the posterior median, this is a random thresholding
procedure. Other thresholding rules employing the same threshold can also be
used. Probability bounds on the threshold chosen by the marginal maximum
likelihood approach lead to overall risk bounds over classes of signal
sequences of length n, allowing for sparsity of various kinds and degrees.
The signal classes considered are ``nearly black'' sequences where only a
proportion \eta is allowed to be nonzero, and sequences with normalized \ell_p
norm bounded by \eta, for \eta >0 and 0<p\le 2. Estimation error is measured by
mean qth power loss, for 0<q\le 2. For all the classes considered, and for all
q in (0,2], the method achieves the optimal estimation rate as n\to \infty and
\eta \to 0 at various rates, and in this sense adapts automatically to the
sparseness or otherwise of the underlying signal. In addition the risk is
uniformly bounded over all signals. If the posterior mean is used as the
estimator, the results still hold for q>1. Simulations show excellent
performance.
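A rough sketch of the pipeline (spike-and-slab marginal likelihood, mixing weight by marginal maximum likelihood, then thresholding); here the heavy-tailed \gamma is a Laplace density with a fixed scale, and the final rule keeps an observation when the posterior probability of a nonzero mean exceeds 1/2, a simplified stand-in for the posterior-median rule of the paper:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    A = 0.5   # Laplace slab scale (fixed here; the paper is more general)

    def slab(x, a=A):
        # density of mu + noise, mu ~ Laplace(a), noise ~ N(0, 1)
        return (a / 2) * np.exp(a**2 / 2) * (
            np.exp(-a * x) * norm.cdf(x - a) + np.exp(a * x) * norm.sf(x + a))

    def neg_marginal_loglik(w, x):
        return -np.log(w * slab(x) + (1 - w) * norm.pdf(x)).sum()

    rng = np.random.default_rng(1)
    mu = np.zeros(1000); mu[:25] = 4.0            # a "nearly black" signal
    x = mu + rng.normal(size=1000)

    w = minimize_scalar(neg_marginal_loglik, bounds=(1e-4, 1 - 1e-4),
                        args=(x,), method="bounded").x
    post = w * slab(x) / (w * slab(x) + (1 - w) * norm.pdf(x))
    muhat = np.where(post > 0.5, x, 0.0)          # simplified threshold rule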
|
math
|
3,185 |
Optimality of neighbor-balanced designs for total effects
|
math.ST
|
The purpose of this paper is to study optimality of circular
neighbor-balanced block designs when neighbor effects are present in the model.
In the literature many optimality results are established for direct effects
and neighbor effects separately, but few for total effects, that is, the sum of
direct effect of treatment and relevant neighbor effects. We show that circular
neighbor-balanced designs are universally optimal for total effects among
designs with no self neighbor. Then we give efficiency factors of these
designs, and show some situations where a design with self neighbors is
preferable to a neighbor-balanced design.
|
math
|
3,186 |
Construction of E(s^2)-optimal supersaturated designs
|
math.ST
|
Booth and Cox proposed the E(s^2) criterion for constructing two-level
supersaturated designs. Nguyen [Technometrics 38 (1996) 69-73] and Tang and Wu
[Canad. J. Statist 25 (1997) 191-201] independently derived a lower bound for
E(s^2). This lower bound can be achieved only when m is a multiple of N-1,
where m is the number of factors and N is the run size. We present a method
that uses difference families to construct designs that satisfy this lower
bound. We also derive better lower bounds for the case where the Nguyen-Tang-Wu
bound is not achievable. Our bounds cover more cases than a bound recently
obtained by Butler, Mead, Eskridge and Gilmour [J.
R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 621-632]. New E(s^2)-optimal
designs are obtained by using a computer to search for designs that achieve the
improved bounds.
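For reference, a short sketch that evaluates the E(s^2) criterion of a candidate design and the Nguyen-Tang-Wu lower bound quoted above (formula as usually stated; N runs, m factors, entries +-1):

    import numpy as np

    def E_s2(D):
        # average of s_ij^2 over column pairs, s_ij = inner product of
        # columns i and j of the N x m design matrix D (entries +-1)
        S = D.T @ D
        m = D.shape[1]
        off = S[np.triu_indices(m, k=1)]
        return np.mean(off.astype(float) ** 2)

    def ntw_bound(N, m):
        # attainable only when m is a multiple of N - 1
        return N**2 * (m - N + 1) / ((m - 1) * (N - 1))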
|
math
|
3,187 |
Complexity regularization via localized random penalties
|
math.ST
|
In this article, model selection via penalized empirical loss minimization in
nonparametric classification problems is studied. Data-dependent penalties are
constructed, which are based on estimates of the complexity of a small subclass
of each model class, containing only those functions with small empirical loss.
The penalties are novel since those considered in the literature are typically
based on the entire model class. Oracle inequalities using these penalties are
established, and the advantage of the new penalties over those based on the
complexity of the whole model class is demonstrated.
|
math
|
3,188 |
Generalization bounds for averaged classifiers
|
math.ST
|
We study a simple learning algorithm for binary classification. Instead of
predicting with the best hypothesis in the hypothesis class, that is, the
hypothesis that minimizes the training error, our algorithm predicts with a
weighted average of all hypotheses, weighted exponentially with respect to
their training error. We show that the prediction of this algorithm is much
more stable than the prediction of an algorithm that predicts with the best
hypothesis. By allowing the algorithm to abstain from predicting on some
examples, we show that the predictions it makes when it does not abstain are
very reliable. Finally, we show that the probability that the algorithm
abstains is comparable to the generalization error of the best hypothesis in
the class.
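A compact sketch of the scheme (hypothesis predictions as matrix columns, weights exponential in the training error, abstention on close votes); the temperature eta and the abstention margin are ad hoc knobs, not values from the paper:

    import numpy as np

    def averaged_predict(H_train, y_train, H_test, eta=4.0, margin=0.8):
        # H_train/H_test hold each hypothesis' +-1 predictions (columns)
        # on the training/test points; labels y_train are +-1.
        n = len(y_train)
        train_err = np.mean(H_train != y_train[:, None], axis=0)
        w = np.exp(-eta * n * train_err)   # exponential weights
        w /= w.sum()
        vote = H_test @ w                  # weighted average in [-1, 1]
        pred = np.sign(vote)
        pred[np.abs(vote) < margin] = 0    # 0 = abstain on close votes
        return pred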
|
math
|
3,189 |
Statistical properties of the method of regularization with periodic Gaussian reproducing kernel
|
math.ST
|
The method of regularization with the Gaussian reproducing kernel is popular
in the machine learning literature and successful in many practical
applications.
In this paper we consider the periodic version of the Gaussian kernel
regularization.
We show, in the white noise model setting, that in function spaces of very
smooth functions, such as the infinite-order Sobolev space and the space of
analytic functions, the method under consideration is asymptotically minimax;
in finite-order Sobolev spaces, the method is rate optimal, and the efficiency
in terms of constant when compared with the minimax estimator is reasonably
high. The smoothing parameters in the periodic Gaussian regularization can be
chosen adaptively without loss of asymptotic efficiency. The results derived in
this paper give a partial explanation of the success of the
Gaussian reproducing kernel in practice. Simulations are carried out to study
the finite sample properties of the periodic Gaussian regularization.
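A sketch, assuming the common Fourier-series form of the periodic Gaussian kernel on [0, 1) with eigenvalues exp(-j^2 * omega); the method of regularization then reduces to kernel ridge regression:

    import numpy as np

    def periodic_gauss_kernel(s, t, omega=0.5, J=50):
        # s, t: 1-D arrays of points in [0, 1); truncated Fourier series
        j = np.arange(1, J + 1)
        diff = s[:, None] - t[None, :]
        return 1.0 + 2.0 * np.sum(
            np.exp(-j**2 * omega)[:, None, None]
            * np.cos(2 * np.pi * j[:, None, None] * diff[None, :, :]), axis=0)

    def fit_predict(x, y, xnew, lam=1e-3, **kw):
        # standard kernel ridge solution (K + n*lam*I) alpha = y
        K = periodic_gauss_kernel(x, x, **kw)
        alpha = np.linalg.solve(K + len(x) * lam * np.eye(len(x)), y)
        return periodic_gauss_kernel(xnew, x, **kw) @ alpha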
|
math
|
3,190 |
Simultaneous prediction of independent Poisson observables
|
math.ST
|
Simultaneous predictive distributions for independent Poisson observables are
investigated. A class of improper prior distributions for Poisson means is
introduced. The Bayesian predictive distributions based on priors from the
introduced class are shown to be admissible under the Kullback-Leibler loss. A
Bayesian predictive distribution based on a prior in this class dominates the
Bayesian predictive distribution based on the Jeffreys prior.
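To make the Jeffreys-prior benchmark concrete: after observing a single count x under the Jeffreys prior \pi(\lambda) \propto \lambda^{-1/2}, the posterior is Gamma(x + 1/2, 1) and the predictive for a new Poisson count is negative binomial; a minimal sketch:

    import numpy as np
    from scipy.stats import nbinom

    def jeffreys_poisson_predictive(x_obs):
        # Gamma(x_obs + 1/2, rate 1) posterior mixed against the Poisson
        # likelihood gives NB(n = x_obs + 1/2, p = 1/2)
        return nbinom(x_obs + 0.5, 0.5)

    pred = jeffreys_poisson_predictive(3)
    print(pred.pmf(np.arange(8)))   # predictive probabilities for y = 0..7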
|
math
|
3,191 |
Maximum Fisher information in mixed state quantum systems
|
math.ST
|
We deal with the maximization of classical Fisher information in a quantum
system depending on an unknown parameter. This problem has been raised by
physicists, who defined [Helstrom (1967) Phys. Lett. A 25 101-102] a quantum
counterpart of classical Fisher information, which has been found to constitute
an upper bound for classical information itself [Braunstein and Caves (1994)
Phys. Rev. Lett. 72 3439-3443]. It has then become of relevant interest among
statisticians, who investigated the relations between classical and quantum
information and derived a condition for equality in the particular case of
two-dimensional pure state systems [Barndorff-Nielsen and Gill (2000) J. Phys.
A 33 4481-4490]. In this paper we show that this condition holds even in the
more general setting of two-dimensional mixed state systems. We also derive the
expression of the maximum Fisher information achievable and its relation with
that attainable in pure states.
|
math
|
3,192 |
Aggregation for Regression Learning
|
math.ST
|
This paper studies statistical aggregation procedures in the regression setting.
A motivating factor is the existence of many different methods of estimation,
leading to possibly competing estimators.
We consider here three different types of aggregation: model selection (MS)
aggregation, convex (C) aggregation and linear (L) aggregation. The objective
of (MS) is to select the optimal single estimator from the list; that of (C) is
to select the optimal convex combination of the given estimators; and that of
(L) is to select the optimal linear combination of the given estimators. We are
interested in evaluating the rates of convergence of the excess risks of the
estimators obtained by these procedures. Our approach is motivated by recent
minimax results in Nemirovski (2000) and Tsybakov (2003).
There exist competing aggregation procedures achieving optimal convergence
separately for each one of (MS), (C) and (L) cases. Since the bounds in these
results are not directly comparable with each other, we suggest an alternative
solution. We prove that all the three optimal bounds can be nearly achieved via
a single "universal" aggregation procedure. We propose such a procedure which
consists in mixing the initial estimators with weights obtained by
penalized least squares. Two different penalties are considered: one of them
is related to hard thresholding techniques, the second one is a data dependent
$L_1$-type penalty. Consequently, our method can be endorsed by both the
proponents of model selection and the advocates of model averaging.
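A minimal sketch of the $L_1$-penalized mixing step (a fixed penalty level stands in for the paper's data-dependent penalty; all names are illustrative):

    import numpy as np
    from sklearn.linear_model import Lasso

    def aggregate(preds_val, y_val, preds_new, lam=0.01):
        # columns of preds_val are the initial estimators' predictions on
        # a validation sample; learn mixing weights by L1-penalized least
        # squares, then combine the estimators' predictions on new data
        mix = Lasso(alpha=lam, fit_intercept=False).fit(preds_val, y_val)
        return preds_new @ mix.coef_, mix.coef_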
|
math
|
3,193 |
Statistical modeling of causal effects in continuous time
|
math.ST
|
This article studies the estimation of the causal effect of a time-varying
treatment on a time-to-event outcome or on some other continuously distributed
outcome. The paper applies to the situation where treatment is repeatedly
adapted to time-dependent patient characteristics. The treatment effect cannot
be estimated by simply conditioning on these time-dependent patient
characteristics, as they may themselves be indications of the treatment effect.
This time-dependent confounding is common in observational studies. Robins
[(1992) Biometrika 79 321--334, (1998b) Encyclopedia of Biostatistics 6
4372--4389] has proposed the so-called structural nested models to estimate
treatment effects in the presence of time-dependent confounding. In this
article we provide a conceptual framework and formalization for structural
nested models in continuous time. We show that the resulting estimators are
consistent and asymptotically normal. Moreover, as conjectured in Robins
[(1998b) Encyclopedia of Biostatistics 6 4372--4389], a test for whether
treatment affects the outcome of interest can be performed without specifying a
model for treatment effect. We illustrate the ideas in this article with an
example.
|
math
|
3,194 |
An introduction to (smoothing spline) ANOVA models in RKHS with examples in geographical data, medicine, atmospheric science and machine learning
|
math.ST
|
Smoothing Spline ANOVA (SS-ANOVA) models in reproducing kernel Hilbert spaces
(RKHS) provide a very general framework for data analysis, modeling and
learning in a variety of fields. Discrete, noisy scattered, direct and indirect
observations can be accommodated with multiple inputs and multiple possibly
correlated outputs and a variety of meaningful structures. The purpose of this
paper is to give a brief overview of the approach and describe and contrast a
series of applications, while noting some recent results.
|
math
|
3,195 |
You Can Fool Some People Sometimes
|
math.ST
|
We develop an empirical procedure to quantify future company performance
based on top management promises. We find that the number of future tense
sentence occurrences in 10-K reports is significantly negatively correlated
with the return as well as with the excess return on the company stock price.
We extrapolate the same methodology to US presidential campaigns since 1960 and
come to some startling conclusions.
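A crude sketch of the kind of feature involved, counting the fraction of sentences with a future-tense marker; the marker list and sentence splitter are ad hoc, not the authors' procedure:

    import re

    FUTURE = re.compile(r"\b(will|shall|going to|expect(s|ed)? to)\b", re.I)

    def future_tense_fraction(text):
        sentences = [s for s in re.split(r"[.!?]+\s", text) if s.strip()]
        hits = sum(bool(FUTURE.search(s)) for s in sentences)
        return hits / max(len(sentences), 1)

    print(future_tense_fraction(
        "We will expand margins. Revenue grew 4%. We expect to hire."))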
|
math
|
3,196 |
A hierarchical technique for estimating location parameter in the presence of missing data
|
math.ST
|
This paper proposes a hierarchical method for estimating the location
parameters of a multivariate vector in the presence of missing data. At the
i-th step of this procedure, an estimate of the location parameters for the
non-missing components of the vector is obtained by iteratively combining the
information in the subset of observations with those components present with
updated estimates of the location parameters from all subsets with even more
missing components. If the variance-covariance matrix is known, then the
resulting estimator is unbiased with the smallest variance, provided missing
data are ignorable. It is also shown that the estimator based on consistent
estimators of the variance-covariance matrices attains unbiasedness and the
smallest variance asymptotically. This approach can also be extended to some
cases of non-ignorable missing data. Applying the methodology to data with
random dropouts yields the well-known Kaplan-Meier estimator.
|
math
|
3,197 |
Dynamics of Interest Rate Curve by Functional Auto-Regression
|
math.ST
|
The paper uses functional auto-regression to predict the dynamics of the
interest rate curve. It estimates the auto-regressive operator by extending
methods of reduced-rank auto-regression to functional data. Such an estimation
technique is better suited for prediction purposes as opposed to the methods
based either on principal components or canonical correlations. The consistency
of the estimator is proved using methods of operator theory. The estimation
method is used to analyze the dynamics of Eurodollar futures rates. The results
suggest that future movements of interest rates are predictable at 1-year
horizons.
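A stylized sketch of a rank-constrained AR(1) fit on discretized curves (rows of X); truncating the SVD of the OLS coefficient is a simplification of full reduced-rank regression:

    import numpy as np

    def far1_fit(X, rank):
        # X: (time x gridpoints) matrix of discretized curves; regress
        # X_{t+1} on X_t, then truncate the operator to the given rank
        X0, X1 = X[:-1], X[1:]
        A, *_ = np.linalg.lstsq(X0, X1, rcond=None)   # OLS operator
        U, s, Vt = np.linalg.svd(A)
        return U[:, :rank] * s[:rank] @ Vt[:rank]     # low-rank operator

    # one-step-ahead prediction of the next curve: X[-1] @ far1_fit(X, k)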
|
math
|
3,198 |
Average treatment effect estimation via random recursive partitioning
|
math.ST
|
A new matching method is proposed for the estimation of the average treatment
effect of social policy interventions (e.g., training programs or health care
measures). Given an outcome variable, a treatment and a set of pre-treatment
covariates, the method is based on the examination of random recursive
partitions of the space of covariates using regression trees. A regression tree
is grown on either the treated or the untreated individuals {\it only}, using
as response variable a random permutation of the indices $1,\dots,n$ ($n$ being the
number of units involved), while the indices for the other group are predicted
using this tree. The procedure is replicated in order to rule out the effect of
specific permutations. The average treatment effect is estimated in each tree
by matching treated and untreated in the same terminal nodes. The final
estimator of the average treatment effect is obtained by averaging on all the
trees grown. The method does not require any specific model assumption apart
from the tree's complexity, which does not affect the estimator though. We show
that this method is both an instrument to check whether two samples can be
matched (by any method) and, when this is feasible, a way to obtain reliable
estimates of the average treatment effect. We further propose a graphical tool
to inspect the quality of the match. The method has been applied to the
National Supported Work Demonstration data, previously analyzed by Lalonde
(1986) and others.
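A sketch of the replication loop under stated assumptions (scikit-learn trees as the partitioning device; since the response is a random permutation, the splits amount to random recursive partitions of the covariate space):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def rrp_ate(y, treat, X, n_rep=200, seed=0):
        rng = np.random.default_rng(seed)
        Xt = X[treat == 1]
        effects = []
        for _ in range(n_rep):
            z = rng.permutation(len(Xt)) + 1.0       # random response
            tree = DecisionTreeRegressor(
                min_samples_leaf=5,
                random_state=int(rng.integers(10**9))).fit(Xt, z)
            leaf = tree.apply(X)                     # leaf id of every unit
            diffs = []
            for l in np.unique(leaf):
                in_l = leaf == l
                t1, t0 = in_l & (treat == 1), in_l & (treat == 0)
                if t1.any() and t0.any():            # match within the leaf
                    diffs.append(y[t1].mean() - y[t0].mean())
            if diffs:
                effects.append(np.mean(diffs))
        return float(np.mean(effects))               # average over trees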
|
math
|
3,199 |
Limiting Behaviour of the Mean Residual Life
|
math.ST
|
In survival or reliability studies, the mean residual life or life expectancy
is an important characteristic of the model. Here, we study the limiting
behaviour of the mean residual life, and derive an asymptotic expansion which
can be used to obtain a good approximation for large values of the time
variable. The asymptotic expansion is valid for a quite general class of
failure rate distributions--perhaps the largest class that can be expected
given that the terms depend only on the failure rate and its derivatives.
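A quick numerical illustration of the leading behaviour (a sketch, with a Weibull example): for large t the mean residual life m(t) = E[X - t | X > t] tracks the reciprocal failure rate 1/r(t), the first term one would expect in such an expansion:

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import weibull_min

    dist = weibull_min(2.0)   # Weibull with shape c = 2

    def mrl(t):
        # m(t) = (1 / S(t)) * integral_t^inf S(u) du
        integral, _ = quad(dist.sf, t, np.inf)
        return integral / dist.sf(t)

    for t in (1.0, 2.0, 3.0):
        r = dist.pdf(t) / dist.sf(t)     # failure rate
        print(t, mrl(t), 1.0 / r)        # m(t) approaches 1/r(t)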
|
math
|