Chair: Barry Sullivan, Ameritech (USA)
Christian Heinrich, CNRS-ESE-UPS (FRANCE)
Jean-Francois Bercher, CNRS-ESE-UPS (FRANCE)
Guy Le Besnerais, CNRS-ESE-UPS (FRANCE)
Guy Demoment, CNRS-ESE-UPS (FRANCE)
The subject of this communication is the restoration of spiky sequences distorted by a linear system and corrupted by additive noise. A (now) classical way of coping with this problem is to use a Bayesian approach with a Bernoulli-Gaussian prior model of the sequence. We refine this method using a Bernoulli-Gaussian plus Gaussian prior model. This estimation method requires maximization of a posterior probability distribution, which cannot be performed optimally. We therefore propose a new non-Bayesian estimation scheme derived from the Kullback-Leibler information, or cross-entropy. This quite general method, called the Maximum Entropy on the Mean Method, is firmly grounded in convex analysis and yields a unique solution which can be computed efficiently in practice and which is, in this sense, truly optimal. Finally, we present results obtained with both methods on a synthetic example.
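As an illustration of the signal model underlying both approaches, the short Python sketch below simulates a Bernoulli-Gaussian spike train passed through a linear system and corrupted by additive Gaussian noise. The spike probability, amplitude variance, blur kernel, and noise level are arbitrary choices made for the example, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Bernoulli-Gaussian spike train: each sample is nonzero with probability p,
# with amplitudes drawn from a zero-mean Gaussian of variance sigma_x**2.
n, p, sigma_x = 256, 0.05, 1.0
spikes = rng.binomial(1, p, n) * rng.normal(0.0, sigma_x, n)

# Distortion by a linear system (an arbitrary low-pass FIR kernel here)
# followed by additive Gaussian noise, as in the observation model above.
h = np.hanning(15)
h /= h.sum()
sigma_n = 0.02
y = np.convolve(spikes, h, mode="same") + rng.normal(0.0, sigma_n, n)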
Anurag Bist, Rockwell International Corporation (USA)
We address the problem of approximating the quantization noise spectra when a Gauss-Markov process is input to a sigma-delta modulator. The process is modeled using a state-space approach. Fine quantization approximations are used to derive closed-form expressions for the quantization noise spectra. The results of this analysis support previous results from more rigorous analyses for both deterministic and random inputs. Finally, fixing the transmission rate, we compare the smoothed error performance of the sigma-delta modulation system with several previously analyzed state quantization schemes.
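For readers who want to experiment with the setup, the Python sketch below feeds a Gauss-Markov (AR(1)) process into a basic first-order, single-bit sigma-delta modulator and forms a periodogram of the quantization error. The modulator order, correlation coefficient, and input scaling are illustrative assumptions and do not reproduce the paper's closed-form analysis.

import numpy as np

rng = np.random.default_rng(1)

# Gauss-Markov (AR(1)) input with unit marginal variance, scaled well inside
# the quantizer range so a fine-quantization regime is plausible.
n, rho = 4096, 0.95
w = rng.normal(0.0, np.sqrt(1.0 - rho ** 2), n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = rho * x[k - 1] + w[k]
x *= 0.3 / np.max(np.abs(x))

# First-order sigma-delta modulator with a single-bit quantizer.
u = 0.0
y = np.zeros(n)
for k in range(n):
    u += x[k] - (y[k - 1] if k > 0 else 0.0)
    y[k] = 1.0 if u >= 0.0 else -1.0

# Periodogram of the quantization error, for comparison with closed-form spectra.
err = y - x
spectrum = np.abs(np.fft.rfft(err)) ** 2 / n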
Hyeokho Choi, University of Illinois at Urbana-Champaign (USA)
David C. Munson Jr., University of Illinois at Urbana-Champaign (USA)
We study the problem of interpolating a bandlimited signal from its nonuniform samples. We consider a class of interpolation algorithms that includes the least-squares optimal interpolator proposed by J. L. Yen, and we derive a closed-form expression for the interpolation error of interpolators of this type. The expression shows that the error depends on the eigenvalue distribution of a matrix specified by the set of sampling points. We note that the usual sinc-kernel interpolator is an approximation to the Yen interpolator, and we suggest a method of choosing the weighting coefficients in the sinc-kernel interpolator. The new sinc-kernel interpolator is superior to the sinc interpolator with the usual Jacobian (area) weighting and is far easier to implement than the Yen interpolator.
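A minimal sketch of one common form of Yen-type least-squares interpolation is given below: it expands the signal in sinc functions centered at the sampling instants and solves the associated Gram system. The function name, the use of the normalized sinc, and the absence of regularization are assumptions made for brevity; in practice the Gram matrix can be ill-conditioned and may need regularization.

import numpy as np

def yen_interpolate(t_samp, f_samp, t_out, W):
    """Minimum-norm interpolation of a signal bandlimited to |f| < W from
    nonuniform samples (one common form of the Yen least-squares interpolator)."""
    t_samp = np.asarray(t_samp, dtype=float)
    t_out = np.asarray(t_out, dtype=float)
    kernel = lambda a, b: np.sinc(2.0 * W * (a[:, None] - b[None, :]))
    S = kernel(t_samp, t_samp)          # Gram matrix of the sampling instants
    c = np.linalg.solve(S, f_samp)      # expansion coefficients
    return kernel(t_out, t_samp) @ c    # evaluate on the requested grid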
S. Dharanipragada, University of Illinois at Urbana-Champaign (USA)
K.S. Arun, University of Michigan (USA)
Resolution analysis for the problem of signal recovery from finitely many linear samples is the subject of this paper. The classical Rayleigh limit serves only as a lower bound on resolution since it does not assume any recovery strategy and is based only on observed data. We show that details finer than the Rayleigh limit can be recovered by simple linear processing that incorporates prior information. We first define a measure of resolution based on allowable levels of error that is more appropriate for current signal recovery strategies than the Rayleigh definition. In the practical situation in which only finitely many noisy observations are available, we have to restrict the class of signals in order to make the resolution measure meaningful. We consider the set of bandlimited and essentially timelimited signals since it describes most signals encountered in practice. For this set we show how to precompute resolution limits from knowledge of measurement functionals, signal-to-noise ratio, passband, energy concentration regions, energy concentration factor, and a prescribed level of error tolerance. In the process we also derive an algorithm for high resolution signal recovery.
Dinei A.F. Florencio, Georgia Institute of Technology (USA)
Ronald W. Schafer, Georgia Institute of Technology (USA)
Sampling and reconstruction are usually analyzed within the framework of linear signal processing. Powerful tools such as the Fourier transform and optimum linear filter design techniques allow a very precise analysis of the process. In particular, an optimum linear filter of any length can be derived in most situations. Many of these tools are not available for non-linear systems, and it is usually difficult to find an optimum non-linear system under any criterion. In this paper we analyze the possibility of using non-linear filtering in the interpolation of subsampled images. We show that a very simple (5x5) non-linear reconstruction filter outperforms (for the images analyzed) linear filters of sizes up to 256x256, including optimum (separable) Wiener filters of any size.
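The paper's specific 5x5 non-linear filter is not reproduced here; as a stand-in that conveys the idea of simple non-linear reconstruction, the sketch below upsamples an image by a factor of two through zero insertion and fills each missing pixel with the median of the known samples in its 5x5 neighbourhood. The window size matches the abstract, but the median rule itself is an assumption made purely for illustration.

import numpy as np

def median_upsample(img):
    """2x upsampling by zero insertion followed by a 5x5 median fill of the
    missing pixels; a stand-in non-linear reconstruction, not the paper's filter."""
    h, w = img.shape
    up = np.full((2 * h, 2 * w), np.nan)
    up[::2, ::2] = img                       # keep the subsampled pixels
    padded = np.pad(up, 2, mode="edge")
    out = up.copy()
    for i, j in np.argwhere(np.isnan(up)):
        window = padded[i:i + 5, j:j + 5]    # 5x5 neighbourhood centred on (i, j)
        out[i, j] = np.nanmedian(window)     # median of the known samples
    return out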
Ashutosh Sabharwal, Ohio State University (USA)
Lee C. Potter, Ohio State University (USA)
In most estimation and design problems, there exists more than one solution that satisfies all constraints. In this paper, we address the problem of estimating the complete set of feasible solutions. Multiple feasible solutions are frequently encountered in signal restoration, image reconstruction, array processing, system identification, and filter design. An estimate of the size of the feasibility set can be used to quantitatively evaluate the inclusion and effectiveness of added constraints. Further, set estimation can be used to detect an empty feasibility set. We compute ellipsoidal approximations to the set of feasible solutions using a new ellipsoid algorithm and the method of analytic centers. Both algorithms easily accommodate multiple convex constraint sets, and both provide a solution that is guaranteed to lie in the interior of the feasibility set.
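For context, the sketch below implements the textbook central-cut ellipsoid method for locating a point in a polyhedral feasible set {x : Ax <= b}; it is not the paper's new ellipsoid algorithm or its analytic-center method, and the stopping rule and initial ball radius are assumptions of the example.

import numpy as np

def ellipsoid_feasible_point(A, b, center, radius, max_iter=500):
    """Textbook central-cut ellipsoid method for finding a point in the
    polyhedron {x : A x <= b}, starting from a ball of the given radius.
    Assumes the dimension is at least 2."""
    n = len(center)
    c = np.asarray(center, dtype=float)
    P = (radius ** 2) * np.eye(n)                 # shape matrix of the ellipsoid
    for _ in range(max_iter):
        violation = A @ c - b
        worst = np.argmax(violation)
        if violation[worst] <= 0.0:               # centre satisfies all constraints
            return c, P
        a = A[worst]
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)                  # normalised cut direction
        c = c - g / (n + 1.0)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1.0)) * np.outer(g, g))
    return None                                   # no feasible centre found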
Hyunduk Ahn, University of Michigan (USA)
We solve the 2-D discrete phase retrieval problem by partitioning it into a mostly decoupled set of 1-D phase retrieval problems. The discrete and modulated Radon transforms are used to formulate two coupled 1-D problems, the solution to which then specifies solutions to the other decoupled 1-D problems. The latter may in turn be solved in parallel; however, using the solution to one problem as the input to a neighboring problem reduces the computation significantly on serial computers. Unlike other exact 2-D phase retrieval methods, which rely on tracking zero curves of algebraic functions or equivalent operations, no continuous-function-based methods are used here. This makes the procedure more robust numerically.
Todd Findley Brennan, MIT Lincoln Laboratory (USA)
Paul H. Milenkovic, University of Wisconsin - Madison (USA)
A novel method is introduced for resampling irregularly sampled data in the presence of noise. The estimator is minimum variance (MV) and minimum mean square error under Gaussian assumptions, and is well-conditioned in general. The Shannon-Whittaker sampling theorem is generalized to use raised-cosine pulses as basis functions. It is shown that this generalization permits fast estimation with $O(N)$ computational requirements for mildly oversampled signals (bandwidth less than $0.9 B_N$, where $B_N$ is the Nyquist bandwidth of the resampled data). Also, some extensions of the inverse estimator and its error characteristics are discussed.
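As a rough sketch of resampling with raised-cosine basis functions, the Python code below evaluates the raised-cosine pulse, fits coefficients on a uniform grid of pulse centers to the irregular samples by ridge-regularized least squares, and evaluates the fit at the desired output times. This is a simplified stand-in rather than the paper's minimum-variance estimator, and the symbol period, roll-off factor, and regularization weight are arbitrary defaults.

import numpy as np

def raised_cosine(t, T=1.0, beta=0.25):
    """Raised-cosine pulse with symbol period T and roll-off factor beta."""
    x = np.asarray(t, dtype=float) / T
    num = np.sinc(x) * np.cos(np.pi * beta * x)
    den = 1.0 - (2.0 * beta * x) ** 2
    out = np.empty_like(x)
    singular = np.abs(den) < 1e-10                # removable singularity at |t| = T/(2*beta)
    out[~singular] = num[~singular] / den[~singular]
    if beta > 0:
        out[singular] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return out

def resample(t_irreg, y, t_out, T=1.0, beta=0.25, lam=1e-6):
    """Ridge-regularised least-squares resampling onto raised-cosine basis
    functions placed on a uniform grid (a stand-in for the MV estimator)."""
    t_irreg = np.asarray(t_irreg, dtype=float)
    t_out = np.asarray(t_out, dtype=float)
    centers = np.arange(np.floor(t_irreg.min()), np.ceil(t_irreg.max()) + T, T)
    Phi = raised_cosine(t_irreg[:, None] - centers[None, :], T, beta)
    coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)
    return raised_cosine(t_out[:, None] - centers[None, :], T, beta) @ coef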
Hiroshi Shimotahira, ATR Optical & Radio Telecommunications Research Laboratories (JAPAN)
The kernel MUSIC (Multiple Signal Classification) algorithm was proposed as an improvement over the existing MUSIC algorithm. The proposed algorithm is based on the orthogonality between the image and kernel spaces of a Hermitian mapping constructed from the signal. The major part of the computation is Gaussian elimination of a matrix; the required processing time grows more slowly with the amount of data than that of the existing MUSIC algorithm, which is based on eigendecomposition. The algorithm is therefore advantageous for processing large data sets. The algorithm was applied to the image reconstruction process of a laser radar, and spatial resolution exceeding the limit imposed by the wavelength scanning range was demonstrated.
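For reference, the sketch below implements the standard eigendecomposition-based MUSIC pseudospectrum that the kernel variant is compared against; the kernel-space formulation itself is not reproduced, and the uniform-sampling steering vectors and frequency grid are assumptions of the example.

import numpy as np

def music_spectrum(X, n_sources, freqs):
    """Pseudospectrum of the standard eigendecomposition-based MUSIC algorithm.
    X is an m x N matrix of snapshots; freqs are normalised frequencies."""
    m, N = X.shape
    R = X @ X.conj().T / N                        # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]               # noise-subspace eigenvectors
    steering = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))
    denom = np.sum(np.abs(En.conj().T @ steering) ** 2, axis=0)
    return 1.0 / denom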
J. Zhang, McMaster University (CANADA)
K.M. Wong, McMaster University (CANADA)
Q. Jin, McMaster University (CANADA)
Q. Wu, McMaster University (CANADA)
In this paper, two new kinds of adaptive frequency shift (FRESH) filters that exploit the cyclostationarity of signals are proposed. One is the LMS adaptive FRESH filter; the other is the blind adaptive FRESH filter. By exploiting the spectral correlation of cyclostationary signals, these adaptive filters can separate signals that overlap in both the time and frequency domains. Theoretical development and simulations of these filters are given in this paper. The results show that for signals that overlap spectrally, the adaptive FRESH filters perform very well whereas ordinary adaptive filters fail. The choice of adaptive FRESH filtering method depends on the conditions under which it is applied.
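A minimal sketch of LMS-adapted FRESH filtering is given below, assuming a known set of cycle frequencies and a training (desired) signal: each branch holds a frequency-shifted copy of the input, and the branch tap weights are adapted jointly by a complex LMS recursion. The filter length, step size, and function name are arbitrary choices for the example; the blind variant described in the paper is not reproduced.

import numpy as np

def lms_fresh(x, d, cycle_freqs, taps=16, mu=0.01):
    """LMS-adapted FRESH filter sketch: each branch filters a frequency-shifted
    copy of x (one shift per cycle frequency) and all branch taps are adapted
    jointly so the summed output tracks the desired signal d."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    shifts = np.exp(2j * np.pi * np.outer(cycle_freqs, np.arange(n)))
    branches = shifts * x                              # frequency-shifted copies of x
    w = np.zeros((len(cycle_freqs), taps), dtype=complex)
    y = np.zeros(n, dtype=complex)
    for k in range(taps, n):
        u = branches[:, k - taps + 1:k + 1][:, ::-1]   # current tap vectors per branch
        y[k] = np.sum(np.conj(w) * u)                  # output summed over branches
        err = d[k] - y[k]
        w += mu * np.conj(err) * u                     # complex LMS update
    return y, w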