Abstract: Session SPTM-17
SPTM-17.1
Bayesian Separation and Recovery of Convolutively Mixed Autoregressive Sources
Simon J Godsill,
Christophe Andrieu (Engineering Department, University of Cambridge)
In this paper we address the problem of the separation and recovery of convolutively mixed autoregressive processes in a Bayesian framework. Solving this problem requires the ability to solve integration and/or optimization problems involving complicated posterior distributions. We thus propose efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) methods. We present three algorithms. The first is a classical Gibbs sampler that generates samples from the posterior distribution. The other two are stochastic optimization algorithms that optimize either the marginal distribution of the sources, or the marginal distribution of the parameters of the sources and mixing filters, conditional upon the observations. Simulation results are presented.
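One conditional step of such a Gibbs sampler can be sketched concretely: under a conjugate Gaussian prior, the AR coefficients of a single source have a Gaussian full conditional given the source signal and innovation variance. The sketch below illustrates only that step, with an arbitrary prior scale delta2 and a simulated AR(2) source; it is not the authors' complete sampler for the convolutive mixing model.

```python
# Minimal sketch: sample AR(p) coefficients a | x, sigma2 from their Gaussian
# full conditional under a zero-mean Gaussian prior a ~ N(0, delta2 * I).
# This is one conditional move of a larger Gibbs sweep, not the full algorithm.
import numpy as np

def sample_ar_coeffs(x, p, sigma2, delta2=10.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Lagged regression: x[t] ~ X[t] @ a, with X[t] = (x[t-1], ..., x[t-p])
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    # Posterior precision and mean of the conjugate Gaussian conditional
    prec = X.T @ X / sigma2 + np.eye(p) / delta2
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma2
    return rng.multivariate_normal(mean, cov)

# Toy usage: recover the coefficients of a simulated AR(2) source
rng = np.random.default_rng(0)
a_true = np.array([0.7, -0.2])
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = a_true @ x[t - 2:t][::-1] + rng.normal(scale=0.5)
print(sample_ar_coeffs(x, p=2, sigma2=0.25, rng=rng))
```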
SPTM-17.2
Estimation of Nonstationary Hidden Markov Models by MCMC Sampling
Petar M Djuric,
Joon-Hwa Chun (State University of New York at Stony Brook)
Hidden Markov models are very important for the analysis of signals and systems. In the past two decades they have attracted the attention of the speech processing community, and recently they have become the favorite models of biologists. A major weakness of conventional hidden Markov models is their inflexibility in modeling state duration. In this paper, we analyze nonstationary hidden Markov models whose state transition probabilities are functions of time, thereby indirectly modeling state durations by a given probability mass function. The objective of our work is to estimate all the unknowns of the nonstationary hidden Markov model, namely its parameters and state sequence. To that end, we construct a Markov chain Monte Carlo sampling scheme in which all the posterior probability distributions of the unknowns are easy to sample from. Extensive simulation results show that the estimation procedure yields excellent results.
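A central conditional in such a scheme is the draw of the hidden state sequence given the time-varying transition probabilities and the emission model. The sketch below is a generic forward filtering-backward sampling step for an HMM whose transition matrix A[t] may differ at every time step; the emission log-likelihoods and transition matrices are assumed given, and in a full sampler they would themselves be updated.

```python
# Minimal sketch: forward filtering-backward sampling (FFBS) of the state
# sequence of an HMM whose transition matrix may change at every time step.
import numpy as np

def ffbs(loglik, A, pi0, rng):
    """loglik: (T, K) log p(y_t | state k); A: (T-1, K, K) row-stochastic
    transition matrices; pi0: (K,) initial distribution.
    Returns one posterior draw of the state path."""
    T, K = loglik.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi0 * np.exp(loglik[0] - loglik[0].max())
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward filtering
        alpha[t] = (alpha[t - 1] @ A[t - 1]) * np.exp(loglik[t] - loglik[t].max())
        alpha[t] /= alpha[t].sum()
    states = np.empty(T, dtype=int)            # backward sampling
    states[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * A[t][:, states[t + 1]]
        states[t] = rng.choice(K, p=w / w.sum())
    return states
```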
SPTM-17.3
A Bayesian Multiscale Framework for Poisson Inverse Problems
Robert D Nowak (ECE Department, Michigan State University, East Lansing, MI 48824-1226),
Eric D Kolaczyk (Department of Math & Stat, Boston University, Boston, MA 02215)
This paper describes a maximum a posteriori (MAP) estimation method for linear inverse problems involving Poisson data based on a novel multiscale framework. The framework itself is founded on a carefully designed multiscale prior probability distribution placed on the "splits" in the multiscale partition of the underlying intensity, and it admits a remarkably simple MAP estimation procedure using an expectation-maximization (EM) algorithm. Unlike many other approaches to this problem, the EM update equations for our algorithm have simple, closed-form expressions. Additionally, our class of priors has the interesting feature that the "non-informative" member yields the traditional maximum likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity.
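As a point of reference for the non-informative case mentioned above, the maximum-likelihood solution can be computed with the classical EM iteration for Poisson linear inverse problems (Richardson-Lucy style multiplicative updates). The sketch below shows only that baseline, with an arbitrary Gaussian blur matrix H; the multiscale prior and the paper's MAP updates are not reproduced.

```python
# Minimal sketch of the maximum-likelihood special case: EM for a Poisson
# linear inverse problem y ~ Poisson(H @ lam), i.e. Richardson-Lucy style
# multiplicative updates. The paper's MAP procedure adds a multiscale prior
# on top of this; only the non-informative limit is shown here.
import numpy as np

def poisson_em(y, H, n_iter=200):
    lam = np.full(H.shape[1], y.mean() / H.sum(axis=0).mean())  # positive init
    ones = H.sum(axis=0)                                        # normalizer H^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(H @ lam, 1e-12)
        lam *= (H.T @ ratio) / ones
    return lam

# Toy usage: 1-D Gaussian blur of a piecewise-constant intensity
rng = np.random.default_rng(1)
n = 64
H = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)] for i in range(n)])
H /= H.sum(axis=1, keepdims=True)
lam_true = np.where(np.arange(n) < n // 2, 5.0, 20.0)
y = rng.poisson(H @ lam_true)
print(np.round(poisson_em(y, H)[:8], 2))
```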
SPTM-17.4
Bayesian Framework for Unsupervised Classification with Application to Target Tracking
Rangasami L Kashyap,
Srinivas Sista (Purdue University)
We present a solution to the problem of unsupervised classification of multidimensional data. Our approach is based on Bayesian estimation, which regards the number of classes, the data partition, and the parameter vectors that describe the class densities as unknowns. We compute their MAP estimates simultaneously by maximizing their joint posterior probability density given the data. The concept of the partition as a variable to be estimated is a unique feature of our method. This formulation also solves the problem of validating clusters obtained from various methods. Our method can incorporate any additional information about a class while assigning its probability density, and it can utilize any available training samples that arise from different classes. We provide a descent algorithm that starts with an arbitrary partition of the data and iteratively computes the MAP estimates. The proposed method is applied to target tracking data. The results obtained demonstrate the power of the Bayesian approach for unsupervised classification.
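A heavily simplified sketch of such a descent is given below for the special case of a fixed number of Gaussian classes with identity covariances and flat priors, so that the joint criterion reduces to a classification likelihood: starting from an arbitrary partition, each sweep re-estimates the class means and then reassigns every point to its most probable class. Estimating the number of classes and general class densities, as in the paper, is not attempted here.

```python
# Minimal sketch of a descent over (partition, parameters), assuming a fixed
# number K of Gaussian classes with identity covariance and flat priors.
import numpy as np

def partition_descent(X, K, n_sweeps=50, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    labels = rng.integers(K, size=len(X))            # arbitrary initial partition
    for _ in range(n_sweeps):
        # parameter step: estimate of each class mean (re-seed empty classes)
        mu = np.stack([X[labels == k].mean(axis=0) if np.any(labels == k)
                       else X[rng.integers(len(X))] for k in range(K)])
        # partition step: reassign each point to its most probable class
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        new = d2.argmin(axis=1)
        if np.array_equal(new, labels):
            break                                    # converged to a local optimum
        labels = new
    return labels, mu

# Toy usage: two well-separated 2-D clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
labels, mu = partition_descent(X, K=2, rng=rng)
print(np.round(mu, 2))
```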
SPTM-17.5
Structure and Parameter Learning via Entropy Minimization, with Applications to Mixture and Hidden Markov Models
Matthew E Brand (MERL -- a Mitsubishi Electric Research Lab)
We develop a computationally efficient framework for finding compact and highly accurate hidden-variable models via entropy minimization. The main results are: 1) an entropic prior that favors small, unambiguous, maximally structured models; 2) a prior-balancing manipulation of Bayes' rule that allows one to gradually introduce or remove constraints in the course of iterative re-estimation (results 1 and 2 combined give the information-theoretic Helmholtz free energy of the model and the means to manipulate it); 3) maximum a posteriori (MAP) estimators such that entropy optimization and deterministic annealing can be performed wholly within expectation-maximization (EM); and 4) trimming tests that identify excess parameters whose removal will increase the posterior, thereby simplifying the model and preventing over-fitting. The end result is a fast and exact hill-climbing algorithm that mixes continuous and combinatorial optimization and evades sub-optimal equilibria.
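As a small numerical illustration of result 1, the sketch below computes the MAP estimate of a multinomial parameter vector under the entropic prior P(theta) proportional to exp(-H(theta)), given evidence counts omega; the optimizer and the toy counts are arbitrary choices, and the paper's closed-form updates and trimming tests are not reproduced.

```python
# Minimal sketch: MAP estimate of multinomial parameters theta under the
# entropic prior P(theta) ∝ prod_i theta_i**theta_i (i.e. exp(-H(theta))),
# given evidence counts omega, solved numerically on the simplex.
import numpy as np
from scipy.optimize import minimize

def entropic_map(omega, eps=1e-9):
    omega = np.asarray(omega, dtype=float)
    n = len(omega)

    def neg_log_post(theta):
        theta = np.clip(theta, eps, 1.0)
        # -(log likelihood + log entropic prior)
        return -(omega * np.log(theta) + theta * np.log(theta)).sum()

    res = minimize(neg_log_post, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(eps, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda t: t.sum() - 1.0}])
    return res.x

omega = np.array([8.0, 3.0, 0.4, 0.1])
print(np.round(entropic_map(omega), 3))   # sharper than ML: weakly supported mass shrinks
print(np.round(omega / omega.sum(), 3))   # ML estimate, for comparison
```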
SPTM-17.6
Marginal MAP Estimation Using Markov Chain Monte Carlo
Christian P Robert (Statistical Laboratory, CREST, INSEE, France),
Arnaud Doucet,
Simon J Godsill (Signal Processing Group, University of Cambridge)
Markov chain Monte Carlo (MCMC) methods are powerful
simulation-based techniques for sampling from high-dimensional and/or
non-standard probability distributions. These methods have recently become
very popular in the statistical and signal processing communities as they
allow highly complex inference problems in detection and estimation to be
addressed. However, MCMC is not currently well adapted to the problem of
marginal maximum a posteriori (MMAP) estimation. In this paper, we
present a simple and novel MCMC strategy, called State Augmentation for Marginal
Estimation (SAME), that allows MMAP estimates to be obtained for
Bayesian models. The methodology is very general and we illustrate the
simplicity and utility of the approach by examples in MAP parameter
estimation for hidden Markov models (HMMs) and for missing data
interpolation in autoregressive time series.
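The state-augmentation idea can be illustrated on a toy problem: a two-component Gaussian model with one latent label per observation and a single unknown mean theta. In the sketch below the latent labels are replicated, the number of replicas grows slowly, and the theta-chain therefore concentrates on the marginal MAP value with the labels integrated out. The model, prior, and replica schedule are assumptions made for the illustration and are not the examples treated in the paper.

```python
# Minimal sketch of the SAME idea on a toy model:
#   y_t ~ 0.5 * N(theta, 1) + 0.5 * N(0, 1),  theta ~ N(0, tau2),
# with latent labels z_t. The augmented target uses k replicas of z; as k
# grows, the theta-samples concentrate on the marginal MAP estimate of theta.
import numpy as np

rng = np.random.default_rng(3)
tau2 = 10.0
y = np.concatenate([rng.normal(2.0, 1.0, 60), rng.normal(0.0, 1.0, 60)])

def logphi(x, m):                             # unnormalized Gaussian log-density
    return -0.5 * (x - m) ** 2

theta = 0.0
for it in range(1, 301):
    k = 1 + it // 30                          # slowly increasing replica count
    prec, num = k / tau2, 0.0                 # prior enters once per replica
    for _ in range(k):                        # sample one replica of the labels
        p1 = 1.0 / (1.0 + np.exp(logphi(y, 0.0) - logphi(y, theta)))
        z = rng.random(len(y)) < p1
        prec += z.sum()                       # each replica adds to the
        num += y[z].sum()                     # Gaussian conditional of theta
    theta = rng.normal(num / prec, np.sqrt(1.0 / prec))

print(round(theta, 3))                        # concentrates near the marginal MAP
```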
SPTM-17.7
Gibbs Sampling Approach for Generation of Truncated Multivariate Gaussian Random Variables
Jayesh H Kotecha,
Petar M Djuric (State University of New York at Stony Brook)
In many Monte Carlo simulations, it is important to generate samples from given densities. Recently, researchers in statistical signal processing and related disciplines have shown increased interest in generators of random vectors with truncated multivariate normal probability density functions (pdf's). A straightforward method for their generation is to draw samples from the multivariate normal density and reject the ones that fall outside the acceptance region. This method, known as rejection sampling, can be very inefficient, especially for high dimensions and/or relatively small supports of the random vectors. In this paper we propose an approach for the generation of vectors with truncated Gaussian densities based on Gibbs sampling, which is simple to use and does not reject any of the generated vectors.
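A minimal sketch of such a sampler is given below for an axis-aligned rectangular support [lo, hi]: each coordinate is drawn from its one-dimensional truncated-normal full conditional, so no proposals are ever discarded. The burn-in length, the starting point, and the restriction to rectangular supports are choices made for the illustration.

```python
# Minimal sketch: Gibbs sampler for a multivariate normal N(mu, Sigma)
# truncated to the box lo <= x <= hi. Every coordinate is drawn from its
# 1-D truncated-normal full conditional, so no samples are rejected.
import numpy as np
from scipy.stats import truncnorm

def truncated_mvn_gibbs(mu, Sigma, lo, hi, n_samples, burn_in=200, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = len(mu)
    x = np.clip(mu.copy(), lo, hi)                  # feasible starting point
    out = np.empty((n_samples, d))
    for s in range(n_samples + burn_in):
        for i in range(d):
            idx = [j for j in range(d) if j != i]
            S_io = Sigma[i, idx]
            S_oo_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
            # conditional mean and variance of x_i given the other coordinates
            m = mu[i] + S_io @ S_oo_inv @ (x[idx] - mu[idx])
            v = Sigma[i, i] - S_io @ S_oo_inv @ S_io
            a, b = (lo[i] - m) / np.sqrt(v), (hi[i] - m) / np.sqrt(v)
            x[i] = truncnorm.rvs(a, b, loc=m, scale=np.sqrt(v), random_state=rng)
        if s >= burn_in:
            out[s - burn_in] = x
    return out

# Toy usage: correlated 2-D Gaussian truncated to the positive quadrant
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
samples = truncated_mvn_gibbs(mu, Sigma, lo=np.array([0.0, 0.0]),
                              hi=np.array([np.inf, np.inf]), n_samples=2000)
print(np.round(samples.mean(axis=0), 2))
```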
SPTM-17.8
Efficient Computation of the Bayesian Cramer-Rao Bound on Estimating Parameters of Markov Models
Joseph Tabrikian,
Jeffrey L Krolik (Duke University)
This paper presents a novel method for calculating the hybrid Cramer-Rao lower bound (HCRLB) when the statistical model for the data has a Markovian nature. The method applies to non-linear/non-Gaussian as well as linear/Gaussian models. The approach evaluates the required expectation over the unknown random parameters by means of several one-dimensional integrals computed recursively, thus simplifying a computationally intensive multi-dimensional integration. The method is applied to the problem of refractivity estimation using radar clutter from the sea surface, where the backscatter cross section is assumed to be a Markov process in range. The HCRLB is evaluated and compared to the performance of the corresponding maximum a posteriori estimator. Simulation results indicate that the HCRLB provides a tight lower bound in this application.
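For reference, the hybrid bound is commonly defined as follows (standard form assumed here; the paper's contribution is the recursive evaluation of the expectation for Markovian models). Writing theta for the stacked vector of deterministic parameters theta_d and random parameters theta_r,

```latex
% Standard hybrid Cramer-Rao lower bound (assumed form), for deterministic
% parameters theta_d and random parameters theta_r with joint density
% p(x, theta_r; theta_d):
\[
  \mathrm{HCRLB} = \mathbf{H}^{-1},
  \qquad
  \mathbf{H} = \mathbb{E}_{\mathbf{x},\,\boldsymbol{\theta}_r}\!
  \left[
    -\frac{\partial^{2} \ln p(\mathbf{x},\boldsymbol{\theta}_r;\boldsymbol{\theta}_d)}
          {\partial \boldsymbol{\theta}\,\partial \boldsymbol{\theta}^{T}}
  \right],
  \qquad
  \boldsymbol{\theta} = \begin{bmatrix}\boldsymbol{\theta}_d \\ \boldsymbol{\theta}_r\end{bmatrix},
\]
% so that, under the usual regularity conditions, the error covariance of an
% estimator satisfies E[(theta_hat - theta)(theta_hat - theta)^T] >= H^{-1}
% in the matrix (positive semidefinite) sense.
```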