Authors:
Simon J Godsill,
Christophe Andrieu,
Page (NA) Paper number 2223
Abstract:
In this paper we address the problem of the separation and recovery
of convolutively mixed autoregressive processes in a Bayesian framework.
Solving this problem requires the ability to solve integration and/or
optimization problems of complicated posterior distributions. We thus
propose efficient stochastic algorithms based on Markov chain Monte
Carlo (MCMC) methods. We present three algorithms. The first one is
a classical Gibbs sampler that generates samples from the posterior
distribution. The other two algorithms are stochastic optimization
algorithms that allow optimization of either the marginal distribution
of the sources, or the marginal distribution of the parameters of
the sources and mixing filters, conditional upon the observations. Simulations
are presented.
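A minimal sketch of the alternating conditional-sampling pattern underlying the first (Gibbs) algorithm, applied to a toy Bayesian AR(1) model rather than the paper's convolutive mixture; the priors and simulation settings below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: x[t] = a*x[t-1] + e[t],  e[t] ~ N(0, sigma2)
a_true, sigma2_true, T = 0.8, 0.5, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(sigma2_true))

# Gibbs sampler: flat prior on a, Inverse-Gamma(1, 1) prior on sigma2
n_iter, a, sigma2 = 2000, 0.0, 1.0
samples = np.zeros((n_iter, 2))
xp, xc = x[:-1], x[1:]                       # lagged values and current values
for i in range(n_iter):
    # a | sigma2, x  ~  Gaussian (standard conjugate update)
    var_a = sigma2 / np.sum(xp ** 2)
    mean_a = np.sum(xp * xc) / np.sum(xp ** 2)
    a = rng.normal(mean_a, np.sqrt(var_a))
    # sigma2 | a, x  ~  Inverse-Gamma(shape, scale), drawn as scale / Gamma(shape)
    resid = xc - a * xp
    shape = 1.0 + 0.5 * len(resid)
    scale = 1.0 + 0.5 * np.sum(resid ** 2)
    sigma2 = scale / rng.gamma(shape)
    samples[i] = (a, sigma2)

print("posterior means (a, sigma2):", samples[500:].mean(axis=0))
```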
Authors:
Petar M Djurić,
Joon-Hwa Chun,
Page (NA) Paper number 2255
Abstract:
Hidden Markov models are very important for the analysis of signals and
systems. In the past two decades they have attracted the attention of the
speech processing community, and recently they have become the favorite
models of biologists. A major weakness of conventional hidden Markov
models is their inflexibility in modeling state duration. In this paper,
we analyze nonstationary hidden Markov models whose state transition
probabilities are functions of time, thereby indirectly modeling state
durations by a given probability mass function. The objective of our
work is to estimate all the unknowns of the nonstationary hidden Markov
model: its parameters and state sequence. To that end, we construct
a Markov chain Monte Carlo sampling scheme in which all the posterior
probability distributions of the unknowns are easy to sample from.
Extensive simulation results show that the estimation procedure yields
excellent results.
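A toy illustration of the modeling idea only: the probability of leaving a state is made a function of the time already spent in it, so durations follow a chosen probability mass function rather than the geometric law of a conventional HMM. The Poisson duration law and Gaussian emissions are illustrative assumptions, not the paper's model:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
T, means = 400, np.array([0.0, 3.0])            # two states with Gaussian emissions

def leave_prob(d, lam=8.0):
    # Hazard of a Poisson(lam) duration: P(D = d) / P(D >= d)
    return poisson.pmf(d, lam) / max(poisson.sf(d - 1, lam), 1e-12)

states, obs = np.zeros(T, dtype=int), np.zeros(T)
s, d = 0, 1                                      # current state and its running duration
for t in range(T):
    states[t] = s
    obs[t] = rng.normal(means[s], 1.0)
    if rng.random() < leave_prob(d):             # time-varying transition probability
        s, d = 1 - s, 1                          # switch state, reset duration clock
    else:
        d += 1

changes = np.flatnonzero(np.diff(states) != 0)
print("mean state duration:", float(np.mean(np.diff(changes))))   # should be near 8
```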
Authors:
Robert D Nowak,
Eric D Kolaczyk,
Page (NA) Paper number 1978
Abstract:
This paper describes a maximum a posteriori (MAP) estimation method
for linear inverse problems involving Poisson data, based on a novel
multiscale framework. The framework itself is founded on a carefully
designed multiscale prior probability distribution placed on the ``splits''
in the multiscale partition of the underlying intensity, and it admits
a remarkably simple MAP estimation procedure using an expectation-maximization
(EM) algorithm. Unlike many other approaches to this problem, the EM
update equations for our algorithm have simple, closed-form expressions.
Additionally, our class of priors has the interesting feature that
the ``non-informative'' member yields the traditional maximum likelihood
solution; other choices are made to reflect prior belief as to the
smoothness of the unknown intensity.
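A hedged sketch of the multiscale ``splits'' idea in the direct-observation (denoising) special case: with a Beta prior on each split fraction, its MAP estimate has the closed form shown below, and the non-informative (uniform) prior recovers the maximum-likelihood split. The full linear inverse problem in the paper wraps such updates inside an EM iteration; the prior parameter and test intensity here are illustrative assumptions:

```python
import numpy as np

def multiscale_map(counts, mass=None, a=2.0):
    """MAP intensity from Poisson counts (length a power of 2) via recursive splits."""
    counts = np.asarray(counts, dtype=float)
    if mass is None:
        mass = counts.sum()              # non-informative estimate of the total intensity
    if len(counts) == 1:
        return np.array([mass])
    left, right = counts[: len(counts) // 2], counts[len(counts) // 2 :]
    # Closed-form MAP of the split fraction under a Beta(a, a) prior;
    # a = 1 (the non-informative choice) recovers the maximum-likelihood split.
    rho = (left.sum() + a - 1.0) / (counts.sum() + 2.0 * a - 2.0)
    return np.concatenate([multiscale_map(left, rho * mass, a),
                           multiscale_map(right, (1.0 - rho) * mass, a)])

rng = np.random.default_rng(2)
intensity = np.r_[np.full(16, 2.0), np.full(16, 10.0)]   # piecewise-constant truth
counts = rng.poisson(intensity)
print(np.round(multiscale_map(counts), 2))
```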
Authors:
Rangasami L Kashyap,
Srinivas Sista,
Page (NA) Paper number 2196
Abstract:
We present a solution to the problem of unsupervised classification
of multidimensional data. Our approach is based on Bayesian estimation,
which regards the number of classes, the data partition, and the parameter
vectors that describe the class densities as unknowns. We compute
their MAP estimates simultaneously by maximizing their joint posterior
probability density given the data. The concept of partition as a variable
to be estimated is a unique feature of our method. This formulation
also solves the problem of validating clusters obtained from various
methods. Our method can also incorporate any additional information
about a class while assigning its probability density. It can also
utilize any available training samples that arise from different classes.
We provide a descent algorithm that starts with an arbitrary partition
of the data and iteratively computes the MAP estimates. The proposed
method is applied to target tracking data. The results demonstrate
the power of the Bayesian approach to unsupervised classification.
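A hedged sketch of a partition-descent scheme in this spirit: starting from an arbitrary partition, per-class parameters are re-fitted and points are reassigned to their most probable class until the partition stops changing (a classification-EM step). The Gaussian class model, equal priors, and fixed number of classes are simplifying assumptions; the paper also treats the number of classes as an unknown:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.r_[rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))]   # two clusters
K = 2
labels = rng.integers(0, K, len(X))                                  # arbitrary initial partition

for _ in range(20):
    # Re-fit per-class means given the current partition
    means = np.array([X[labels == k].mean(axis=0) for k in range(K)])
    # Reassign each point to the class whose mean is closest, i.e. the MAP class
    # under equal priors and equal spherical covariances
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    new = d2.argmin(axis=1)
    if np.array_equal(new, labels):       # descent has converged: partition is stable
        break
    labels = new

print("class sizes:", np.bincount(labels))
print("class means:\n", np.round(means, 2))
```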
Authors:
Matthew E Brand,
Page (NA) Paper number 1802
Abstract:
We develop a computationally efficient framework for finding compact
and highly accurate hidden-variable models via entropy minimization.
The main results are: 1) An entropic prior that favors small, unambiguous,
maximally structured models. 2) A prior-balancing manipulation of Bayes'
rule that allows one to gradually introduce or remove constraints in
the course of iterative re-estimation. #1 and #2 combined give the
information-theoretic Helmholtz free energy of the model and the means
to manipulate it. 3) Maximum a posteriori (MAP) estimators such that
entropy optimization and deterministic annealing can be performed wholly
within expectation-maximization (EM). 4) Trimming tests that identify
excess parameters whose removal will increase the posterior, thereby
simplifying the model and preventing over-fitting. The end result is
a fast and exact hill-climbing algorithm that mixes continuous and
combinatorial optimization and evades sub-optimal equilibria.
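A small illustration of result 1): under the entropic prior P(theta) proportional to exp(-H(theta)), the MAP estimate of a multinomial parameter vector is pulled toward low-entropy (sparse, unambiguous) distributions relative to maximum likelihood. The generic numerical optimizer and softmax parameterization are choices of this sketch, not the paper's fixed-point/EM updates:

```python
import numpy as np
from scipy.optimize import minimize

counts = np.array([7.0, 2.0, 1.0, 0.5])           # toy evidence (expected counts)

def neg_log_posterior(z):
    theta = np.exp(z) / np.exp(z).sum()            # softmax keeps theta on the simplex
    loglik = np.sum(counts * np.log(theta))
    log_prior = np.sum(theta * np.log(theta))      # log exp(-H(theta)): favors low entropy
    return -(loglik + log_prior)

z_map = minimize(neg_log_posterior, np.zeros_like(counts)).x
theta_map = np.exp(z_map) / np.exp(z_map).sum()
theta_ml = counts / counts.sum()
print("ML :", np.round(theta_ml, 3))
print("MAP:", np.round(theta_map, 3))              # more concentrated (lower entropy) than ML
```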
Authors:
Christian P Robert, Statistical Laboratory, CREST, INSEE (France)
Arnaud Doucet,
Simon J Godsill,
Page (NA) Paper number 1916
Abstract:
Markov chain Monte Carlo (MCMC) methods are powerful simulation-based
techniques for sampling from high-dimensional and/or non-standard probability
distributions. These methods have recently become very popular in the
statistical and signal processing communities as they allow highly
complex inference problems in detection and estimation to be addressed.
However, MCMC is not currently well adapted to the problem of marginal
maximum a posteriori (MMAP) estimation. In this paper, we present a
simple and novel MCMC strategy, called State Augmentation for Marginal
Estimation (SAME), that allows MMAP estimates to be obtained for Bayesian
models. The methodology is very general and we illustrate the simplicity
and utility of the approach by examples in MAP parameter estimation
for hidden Markov models (HMMs) and for missing data interpolation
in autoregressive time series.
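A minimal sketch of the state-augmentation idea on a toy conjugate model where the marginal MAP is available in closed form for checking: the missing data are replicated gamma times, and Gibbs sampling on the augmented space targets p(theta | y)^gamma, which concentrates on the marginal MAP as gamma grows. The model and replica schedule are illustrative assumptions, not the paper's HMM or autoregressive examples:

```python
import numpy as np

rng = np.random.default_rng(4)
tau2, n_obs, n_miss = 4.0, 20, 5                   # theta ~ N(0, tau2); data ~ N(theta, 1)
y = rng.normal(1.5, 1.0, n_obs)                    # observed data
mmap_closed_form = y.sum() / (n_obs + 1.0 / tau2)  # marginal MAP of theta, known here

theta, n_iter = 0.0, 3000
for i in range(1, n_iter + 1):
    gamma = 1 + i // 100                           # slowly increasing number of replicas
    # Sample gamma independent replicas of the missing data given theta
    x = rng.normal(theta, 1.0, size=(gamma, n_miss))
    # theta | y, replicas: Gaussian; prior and observed-data terms are counted gamma times
    prec = gamma / tau2 + gamma * n_obs + gamma * n_miss
    mean = (gamma * y.sum() + x.sum()) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))

print("SAME estimate: %.3f   closed-form MMAP: %.3f" % (theta, mmap_closed_form))
```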
Authors:
Jayesh H Kotecha,
Petar M Djurić,
Page (NA) Paper number 2263
Abstract:
In many Monte Carlo simulations, it is important to generate samples
from given densities. Recently, researchers in statistical signal processing
and related disciplines have shown increased interest in a generator
of random vectors with truncated multivariate normal probability density
functions (pdf's). A straightforward method for their generation is
to draw samples from the multivariate normal density and reject the
ones that are outside the acceptance region. This method, which is
known as rejection sampling, can be very inefficient, especially for
high dimensions and/or relatively small supports of the random vectors.
In this paper we propose a Gibbs sampling approach for the generation
of vectors with truncated Gaussian densities that is simple to use
and does not reject any of the generated vectors.
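A hedged sketch of the approach as described: each coordinate is drawn from its univariate normal full conditional truncated to the box, via inverse-CDF sampling, so no draw is ever rejected. The mean, covariance, and truncation box below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
lower, upper = np.array([0.5, 0.5]), np.array([2.0, 2.0])   # truncation box
P = np.linalg.inv(Sigma)                                     # precision matrix

def gibbs_truncated_mvn(n_samples, x0, burn_in=200):
    x, out = x0.copy(), []
    for it in range(n_samples + burn_in):
        for i in range(len(mu)):
            # Univariate conditional N(m, s^2) of coordinate i given the others
            s2 = 1.0 / P[i, i]
            m = mu[i] - s2 * np.dot(P[i], x - mu) + s2 * P[i, i] * (x[i] - mu[i])
            s = np.sqrt(s2)
            # Inverse-CDF draw from N(m, s^2) truncated to [lower[i], upper[i]]
            a, b = norm.cdf((lower[i] - m) / s), norm.cdf((upper[i] - m) / s)
            x[i] = m + s * norm.ppf(rng.uniform(a, b))
        if it >= burn_in:
            out.append(x.copy())
    return np.array(out)

samples = gibbs_truncated_mvn(5000, x0=np.array([1.0, 1.0]))
print("sample mean:", samples.mean(axis=0))
print("all samples inside box:", bool(np.all((samples >= lower) & (samples <= upper))))
```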
Authors:
Joseph Tabrikian, Duke University (USA)
Jeffrey L Krolik, Duke University (USA)
Page (NA) Paper number 2367
Abstract:
This paper presents a novel method for calculating the hybrid Cramér-Rao
lower bound (HCRLB) when the statistical model for the data has a Markovian
nature. The method applies to both non-linear/non-Gaussian and linear/Gaussian
models. The approach evaluates the required expectation over the unknown
random parameters by means of several one-dimensional integrals computed
recursively, thereby simplifying a computationally intensive
multi-dimensional integration. The method is applied to the problem
of refractivity estimation using radar clutter from the sea surface,
where the backscatter cross section is assumed to be a Markov process
in range. The HCRLB is evaluated and compared to the performance of
the corresponding maximum a posteriori estimator. Simulation results
indicate that the HCRLB provides a tight lower bound in this application.
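A hedged sketch of the computational idea only: for a Markov process, an expectation that is naively a multi-dimensional integral over all states can be built from one-dimensional integrals computed recursively, by propagating the marginal density on a grid. The Gaussian AR(1) chain and test function are illustrative assumptions, not the radar-clutter model or the HCRLB expression of the paper:

```python
import numpy as np

a, q, T = 0.9, 0.2, 10                   # x_t = a*x_{t-1} + v_t,  v_t ~ N(0, q),  x_0 ~ N(0, 1)
grid = np.linspace(-5.0, 5.0, 801)
dx = grid[1] - grid[0]

def gauss(x, m, var):
    return np.exp(-(x - m) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

g = lambda x: x ** 2                     # per-step term whose summed expectation we want

marg = gauss(grid, 0.0, 1.0)             # marginal density of x_0 on the grid
total = np.sum(g(grid) * marg) * dx
trans = gauss(grid[:, None], a * grid[None, :], q)   # p(x_t = row | x_{t-1} = col)
for _ in range(1, T):
    marg = trans @ marg * dx             # one 1-D integral propagates the marginal forward
    total += np.sum(g(grid) * marg) * dx

# Closed form for this toy chain: E[x_t^2] = a^(2t) + q * (1 - a^(2t)) / (1 - a^2)
exact = sum(a ** (2 * t) + q * (1 - a ** (2 * t)) / (1 - a ** 2) for t in range(T))
print("recursive quadrature: %.4f   closed form: %.4f" % (total, exact))
```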