Chair: Michael Unser, National Institutes of Health (USA)
Yoram Bresler, University of Illinois at Urbana-Champaign (USA)
We present an overview of two recent developments in tomography: the design of optimum scan patterns for time-varying objects, and an efficient iterative edge-preserving algorithm for limited-angle data. The first development involves the introduction of a new problem definition and changing the ``rules of the game'' through unconventional acquisition formats. It also employs the mathematical tools of lattice theory, which have seen relatively little use in this area. The second development, on the other hand, addresses a classical and long-standing problem by a combination of more rigorous analysis and modeling with some heuristic twists. Both techniques provide considerable improvements over conventional or previously proposed approaches.
Charles A. Bouman, Purdue University
Ken Sauer, University of Notre Dame
Suhail S. Saquib, Purdue University (USA)
Bayesian methods have proven to be powerful tools for computed tomographic reconstruction in realistic physical problems. However, Bayesian methods require that a number of modeling and computational problems be addressed. This paper summarizes a coherent system of statistical modeling and optimization techniques designed to facilitate efficient, unsupervised Bayesian emission and transmission tomographic reconstruction. New results are included on improved convergence behavior of these methods.
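The Bouman-Sauer line of work is associated with coordinate-descent (ICD-style) optimization of the MAP objective. As a minimal illustrative sketch only (a Gaussian-likelihood, quadratic-prior stand-in, not the paper's actual Poisson emission/transmission models; the matrix `A` and function name are assumptions), one pixel-at-a-time MAP update can be written as:

```python
import numpy as np

def icd_map(A, y, lam=0.5, n_sweeps=200):
    """Coordinate-descent MAP estimate for y ~ A x under a quadratic
    (zero-mean Gaussian) prior: minimize ||y - A x||^2 + lam ||x||^2.
    One pixel is updated at a time while a running residual is kept,
    in the spirit of ICD-style tomographic solvers (Gaussian stand-in
    for illustration only)."""
    m, n = A.shape
    x = np.zeros(n)
    e = y - A @ x                      # running residual y - A x
    col_norms = np.sum(A * A, axis=0)  # ||a_j||^2 for each column
    for _ in range(n_sweeps):
        for j in range(n):
            a = A[:, j]
            # Exact 1-D minimizer in coordinate j, others held fixed
            x_new = (a @ e + col_norms[j] * x[j]) / (col_norms[j] + lam)
            e += a * (x[j] - x_new)    # keep the residual consistent
            x[j] = x_new
    return x
```

Each inner update is a closed-form scalar minimization, which is what makes coordinate-wise schemes attractive for large tomographic systems.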
Paul S. Lewis, Los Alamos National Laboratory
John C. Mosher, Los Alamos National Laboratory
Richard M. Leahy, University of Southern California (USA)
In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum norm based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
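The Gaussian assumption the abstract refers to can be made concrete: a minimum-norm distributed source estimate is the MAP solution under an i.i.d. zero-mean Gaussian source prior, i.e. a Tikhonov-regularized pseudoinverse of the lead field. A minimal sketch, assuming a lead-field matrix `L` and regularization weight `alpha` (both names are illustrative):

```python
import numpy as np

def minimum_norm(L, b, alpha=0.1):
    """Minimum-norm distributed source estimate for b ~ L q:
    q_hat = argmin ||b - L q||^2 + alpha ||q||^2,
    i.e. the MAP estimate under an i.i.d. zero-mean Gaussian prior
    on the source amplitudes q."""
    m, _ = L.shape
    # Solve in the (much smaller) sensor space:
    # q = L^T (L L^T + alpha I)^{-1} b
    return L.T @ np.linalg.solve(L @ L.T + alpha * np.eye(m), b)
```

Working in sensor space keeps the linear solve at the number of channels rather than the (far larger) number of candidate source locations.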
Oleh Tretiak, Drexel University (USA)
Alberto Goldszal, Drexel University (USA)
We discuss spatial transformation techniques for multidimensional image registration, with examples from Brain Mapping. We illustrate the problem by reviewing point matching, which is the 'gold standard' in this area. The point matching formulation is not adequate for all applications, and we describe various extensions and generalizations. Important issues include multi-set registration, more general nonlinear models, registration on the basis of intensity data, and registration when the geometrical objects to be matched are curves, surfaces, or volumes. These lead to many interesting problems that deserve further study. Finally, we discuss accuracy in registration; the formulation and measurement of accuracy in geometric matching are challenging and require further research.
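The 'gold standard' point matching problem has a well-known closed-form solution for rigid transformations via the singular value decomposition (the orthogonal Procrustes / Kabsch construction). A minimal sketch, assuming paired point sets with known correspondences (function and variable names are illustrative):

```python
import numpy as np

def rigid_point_match(P, Q):
    """Least-squares rigid registration of paired point sets:
    find rotation R and translation t minimizing
    sum_i ||R p_i + t - q_i||^2, in closed form via the SVD."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper (reflecting) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

The extensions the abstract mentions (multi-set registration, nonlinear models, intensity- and surface-based matching) no longer admit such a closed form, which is part of what makes them interesting.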
Philippe Thevenaz, National Institutes of Health (USA)
Michael Unser, National Institutes of Health (USA)
We present a general framework for the fast, high-quality implementation of geometric affine transformations of images (p=2) or volumes (p=3), including rotations and scaling. The method uses a factorization of the p×p transformation matrix into p+1 elementary matrices, each affecting only one dimension of the data. This yields a separable implementation through an appropriate sequence of 1-D affine transformations (scaling + translation). Each elementary transformation is implemented in an optimal least squares sense using a polynomial spline signal model. We consider various matrix factorizations and compare our method with the conventional non-separable interpolation approach. The new method provides essentially the same quality while offering a significant speed improvement.
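For p=2, one well-known instance of such a factorization is the classic three-shear decomposition of a rotation into p+1 = 3 matrices, each modifying a single coordinate (so each factor can be applied as a 1-D resampling pass along rows or columns). This is an illustrative example, not necessarily the factorization chosen by the authors:

```python
import numpy as np

def rotation_as_shears(theta):
    """Factor a 2-D rotation by angle theta into three elementary
    shears, R = Sx @ Sy @ Sx. Each shear changes only one coordinate,
    so it can be implemented as a sequence of 1-D affine passes."""
    Sx = np.array([[1.0, -np.tan(theta / 2.0)],
                   [0.0,  1.0]])               # x-shear (acts on rows)
    Sy = np.array([[1.0,           0.0],
                   [np.sin(theta), 1.0]])      # y-shear (acts on columns)
    return Sx, Sy, Sx
```

The product of the three shears reproduces the rotation matrix exactly; the practical gain is that each pass only ever interpolates along one axis.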
Dennis Healy, Dartmouth College (USA)
John Weaver, Dartmouth College (USA)
We discuss the advantages and disadvantages of using a Karhunen-Loeve (K-L) expansion of a training set of images to reduce the number of encodes required for a Magnetic Resonance (MR) image of a new object. One form of this technique has been proposed in [1], and another implementation in [2]. We evaluate the error likely to be achieved as a function of the number of encodes, along with two technical problems: reduced SNR in the images and smoothing of the K-L functions in practice. As an alternative, we propose the use of joint best bases [3] derived from the local trigonometric library as an approximation to the K-L basis. These bases approach the rate-distortion characteristic achieved by the K-L basis, but they are easier to use in MRI and can be applied with existing methods for fast acquisition.
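The K-L idea can be sketched with a principal-component computation: the training set determines a small number of eigenimages, and a new image is then acquired as k inner products (the reduced set of "encodes") and reconstructed from those coefficients. A minimal sketch under the simplifying assumption that each encode is an exact inner product with a K-L basis vector (function names are illustrative):

```python
import numpy as np

def kl_basis(training, k):
    """Karhunen-Loeve (principal-component) basis of a training set.
    Rows of `training` are vectorized images; returns the top-k
    eigenimages and the training mean."""
    mean = training.mean(axis=0)
    _, _, Vt = np.linalg.svd(training - mean, full_matrices=False)
    return Vt[:k], mean

def encode_decode(image, basis, mean):
    """Acquire k K-L coefficients (the reduced 'encodes') and
    reconstruct the image from them alone."""
    coeffs = basis @ (image - mean)
    return mean + basis.T @ coeffs
```

The reconstruction error is then governed by the energy in the discarded eigenimages, which is the rate-distortion behavior the abstract compares against the joint-best-basis alternative.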