Authors:
Yu-Len Huang, Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, R.O.C. (Taiwan)
Ruey-Feng Chang, Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, R.O.C. (Taiwan)
Page (NA) Paper number 1088
Abstract:
In this paper, we present nonlinear interpolation schemes for image
resolution enhancement. We propose multilayer perceptron (MLP) interpolation
schemes based on the wavelet transform and subband filtering.
Because estimating each sub-image signal is more effective than estimating
the whole image signal, pixels in the low-resolution image are used
as the input to the MLP to estimate all of the wavelet sub-images
of the corresponding high-resolution image. The increased-resolution
image is finally produced by the wavelet synthesis procedure.
Compared with other popular methods, the results show
a remarkable improvement. Detailed simulation results for
interpolated images and image sequences can be found at http://www.cs.ccu.edu.tw/~hyl/wmi/.
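The scheme above estimates wavelet sub-images and then assembles the high-resolution image by wavelet synthesis. As a minimal illustration of that analysis/synthesis machinery (not the authors' code; a one-level Haar transform with our own normalization):

```python
import numpy as np

def haar2d_analysis(img):
    """One-level 2-D Haar transform: split an image into the four
    subbands (LL, LH, HL, HH) that such schemes estimate."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row lowpass
    d = (img[0::2, :] - img[1::2, :]) / 2   # row highpass
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar2d_synthesis(ll, lh, hl, hh):
    """Inverse transform: the synthesis step that assembles the
    full-resolution image from its four subbands."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

# Perfect reconstruction check on a small test image.
img = np.arange(64.0).reshape(8, 8)
ll, lh, hl, hh = haar2d_analysis(img)
rec = haar2d_synthesis(ll, lh, hl, hh)
```

In the paper's setting, the estimated subbands of the high-resolution image would replace the analyzed ones before synthesis.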
Authors:
Pascal Monasse,
Page (NA) Paper number 1250
Abstract:
We propose a method for image registration which seems to be useful
under the following three conditions. First, the two images are,
globally and roughly, related by a translation and rotation. Second,
some occlusions due to moving objects occur from image 1 to image 2.
Third, because of changes of illumination, contrast may have changed
globally and even locally. Under such unfavorable conditions,
correlation-based global registration may become inaccurate because
of the global compromise it yields between several displacements.
Our method avoids these difficulties by defining a set of local
contrast-invariant features in order to achieve contrast-invariant
matching. A voting procedure eliminates "wrong" matching features
due to the displacement of small objects and yields sub-pixel
accuracy. This method was tested successfully for registration of
watches with moving hands and for road control applications.
Authors:
Gözde Bozkurt,
Ahmet Enis Çetin,
Page (NA) Paper number 1265
Abstract:
Halftoning is a process that deliberately injects noise into the original
image in order to obtain visually pleasing output images with a smaller
number of bits per pixel for displaying or printing purposes. In this
paper, a novel inverse halftoning method is proposed to restore a continuous
tone image from the given halftone image. A set theoretic formulation
is used where three sets are defined using the prior information about
the problem. A new space domain projection is introduced assuming the
halftoning is performed with error diffusion, and the error diffusion
filter kernel is known. The space domain, frequency domain, and space-scale
domain projections are used alternately to obtain a feasible solution
for the inverse halftoning problem which does not have a unique solution.
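The forward process being inverted here is error diffusion halftoning. As a hedged sketch of that forward model (standard Floyd-Steinberg weights; the paper assumes the kernel is known but does not fix it to this one):

```python
import numpy as np

def error_diffusion_halftone(img):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0/255
    and push the quantization error onto unvisited neighbors."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # Standard Floyd-Steinberg kernel: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch halftones to a roughly half black, half white
# dot pattern whose local average preserves the input gray level.
gray = np.full((16, 16), 128.0)
halft = error_diffusion_halftone(gray)
```

The space-domain projection in the paper constrains a candidate continuous-tone image to be consistent with this forward operator.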
Authors:
Wirawan,
Pierre Duhamel,
Henri Maître,
Page (NA) Paper number 1391
Abstract:
We address the problem of reconstructing a high-resolution image from
its undersampled measurements across multiple FIR channels with unknown
responses. Our method consists of two stages: blind multi-input multi-output
(MIMO) deconvolution using FIR filters and blind separation of mixed
polyphase components. The proposed deconvolution method is based on
the mutually referenced equalizers (MRE) algorithm previously developed
for blind equalization in digital communications. For source separation,
a method is proposed for separating mixed polyphase components of a
bandlimited signal. Existing blind source separation algorithms
assume that the source signals are either independent or uncorrelated,
which is not the case when the sources are polyphase components of
a bandlimited signal. Simulation results on artificial and photographic
images are given.
Authors:
Sophie Chardon,
Benoit Vozel,
Kacem Chehdi,
Page (NA) Paper number 1398
Abstract:
In pattern recognition problems, the effectiveness of the analysis
depends heavily on the quality of the image to be processed. This image
may be blurred and/or noisy and the goal of digital image restoration
is to find an estimate of the original image. A fundamental issue in
this process is blur estimation. When the blur is not readily available,
it has to be estimated from the observed image. Two main approaches
can be found in the literature: the first identifies the blur parameters
before any restoration, whereas the second performs identification
and restoration jointly. We present a comparative study of several
parametric blur estimation methods belonging to the first approach,
based on a parametric ARMA modeling of the image. Our purpose is to
evaluate the accuracy of the various methods, on which the restoration
procedure relies, and their robustness to modeling assumptions, noise,
and support size.
Authors:
Stephen E Reichenbach,
Frank Geng,
Page (NA) Paper number 1795
Abstract:
This paper describes two-dimensional, non-separable piecewise polynomial
convolution for image reconstruction. We investigate a two-parameter
kernel with support [-2,2]x[-2,2] and constrained for smooth reconstruction.
Performance in reconstructing a sampled random Markov field is superior
to that of the traditional one-dimensional cubic convolution algorithm.
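The one-dimensional baseline referred to here is conventionally Keys' piecewise-cubic convolution kernel on support [-2, 2] with a free parameter a (commonly a = -0.5). A sketch of that baseline, not of the paper's two-parameter 2-D kernel:

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' piecewise-cubic convolution kernel, supported on [-2, 2].
    It interpolates: k(0) = 1 and k(m) = 0 at nonzero integers m."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    out[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
    return out

def cubic_interp(samples, t, a=-0.5):
    """Interpolate 1-D samples (on integer grid points) at position t."""
    k = np.arange(len(samples))
    return float(np.sum(samples * cubic_kernel(t - k, a)))
```

With a = -0.5 this kernel reproduces linear ramps exactly at interior points, e.g. interpolating `[0, 1, 2, 3, 4, 5]` at t = 2.5 returns 2.5.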
Authors:
Ramesh Neelamani, Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892, USA (USA)
Hyeokho Choi, Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892, USA (USA)
Richard G Baraniuk, Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892, USA (USA)
Page (NA) Paper number 2058
Abstract:
In this paper, we propose a new approach to wavelet-based deconvolution.
Roughly speaking, the algorithm comprises Fourier-domain system inversion
followed by wavelet-domain noise suppression. Our approach subsumes
a number of other wavelet-based deconvolution methods. In contrast
to other wavelet-based approaches, however, we employ a regularized
inverse filter, which allows the algorithm to operate even when the
inverse system is ill-conditioned or non-invertible. Using a mean-square-error
metric, we strike an optimal balance between Fourier-domain and wavelet-domain
regularization. The result is a fast deconvolution algorithm ideally
suited to signals and images with edges and other singularities. In
simulations with real data, the algorithm outperforms the LTI Wiener
filter and other wavelet-based deconvolution algorithms in terms of
both visual quality and MSE performance.
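The two-step structure described above, regularized Fourier-domain inversion followed by wavelet-domain shrinkage, can be sketched in 1-D as follows. This is our own simplified illustration (Tikhonov-style regularized inverse, one level of hand-rolled Haar soft-thresholding), not the authors' algorithm or its MSE-optimal regularization balance:

```python
import numpy as np

def regularized_wavelet_deconv(y, h, reg=1e-2, thresh=0.1):
    """(1) Regularized inverse filter in the Fourier domain, then
    (2) one level of Haar soft-thresholding to suppress the noise
    that the inversion amplifies."""
    n = len(y)
    H = np.fft.fft(h, n)
    # conj(H) / (|H|^2 + reg) stays bounded even where H has (near-)zeros,
    # so the inversion works for ill-conditioned systems.
    x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H)
                                / (np.abs(H)**2 + reg)))
    # One-level Haar analysis.
    approx = (x_hat[0::2] + x_hat[1::2]) / np.sqrt(2)
    detail = (x_hat[0::2] - x_hat[1::2]) / np.sqrt(2)
    # Soft-threshold the detail coefficients.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    # Haar synthesis.
    out = np.empty(n)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Blur a step edge with a 3-tap moving average (circularly) and recover it.
x = np.concatenate([np.zeros(32), np.ones(32)])
h = np.ones(3) / 3
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))
rec = regularized_wavelet_deconv(y, h, reg=1e-3, thresh=0.02)
```

The 3-tap average has near-zeros in its frequency response, which is exactly the regime where an unregularized inverse filter would blow up.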
Authors:
Alfredo Restrepo,
Scott T Acton,
Page (NA) Paper number 2086
Abstract:
We introduce binary locally monotonic regression as a first step in
the study of the application of local monotonicity for image estimation.
Given an algorithm that generates a similar locally monotonic image
from a given image, we can specify both the scale of the image features
retained and the image smoothness. In contrast to the median filter
and to morphological filters, a locally monotonic regression produces
the optimally similar locally monotonic image. Locally monotonic regression
is a computationally expensive technique, and the restriction to binary-range
signals allows the use of Viterbi-type algorithms. Binary locally monotonic
regression is a powerful tool for solving image estimation, image
enhancement, and image segmentation problems.
Authors:
Nhat Nguyen,
Gene Golub,
Peyman Milanfar,
Page (NA) Paper number 2092
Abstract:
Superresolution reconstruction produces a high resolution image from
a set of low resolution images. Previous work had not adequately addressed
the computational issues for this problem. In this paper, we propose
efficient block circulant preconditioners for solving the regularized
superresolution problem by the conjugate gradient (CG) method. The
effectiveness of our preconditioners
is demonstrated with superresolution results for a simulated image
sequence and a FLIR image sequence.
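What makes (block) circulant preconditioners cheap is that circulant systems diagonalize under the FFT, so applying the preconditioner costs only a few FFTs per CG iteration. A minimal 1-D illustration of that solve (our own sketch, not the paper's block construction):

```python
import numpy as np

def circulant_solve(c, b):
    """Solve C x = b for a circulant C given by its first column c.
    C = F^H diag(fft(c)) F, so the solve is two FFTs and a division."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# Check against the explicit circulant matrix.
n = 8
c = np.zeros(n)
c[0], c[1], c[-1] = 4.0, 1.0, 1.0          # well-conditioned stencil
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
b = np.arange(n, dtype=float)
x = circulant_solve(c, b)
```

In 2-D the same idea applies blockwise, with 2-D FFTs diagonalizing block circulant matrices with circulant blocks.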
Authors:
Mehmet Kivanç Mihçak,
Igor Kozintsev,
Kannan Ramchandran,
Page (NA) Paper number 2398
Abstract:
This paper deals with the application to denoising of a very simple
but effective "local" spatially adaptive statistical model for the
wavelet image representation that was recently introduced successfully
in a compression context. Motivated by the intimate connection between
compression and denoising, this paper explores the significant role
of the underlying statistical wavelet image model. The model used here,
a simplified version of that earlier model, is a mixture process
of independent component fields having a zero-mean Gaussian distribution
with unknown variances sigma_k^2 that vary slowly
with the wavelet coefficient location k. We propose to use this model
for image denoising by first estimating the underlying variance
field using a maximum likelihood (ML) rule and then applying the minimum
mean squared error (MMSE) estimation procedure. In the process of variance
estimation, we assume that the variance field is "locally" smooth
to allow its reliable estimation, and use an adaptive window-based
estimation procedure to capture the effect of edges. Our denoising
results compare favorably with the best reported results in the recent
denoising literature.
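The ML-variance-then-MMSE recipe described above amounts, per coefficient, to a spatially adaptive Wiener gain. A hedged sketch under simplifying assumptions (a single fixed square window rather than the paper's adaptive window selection; names are ours):

```python
import numpy as np

def local_mmse_denoise(y, sigma_n, win=5):
    """Denoise y = x + n, n ~ N(0, sigma_n^2): estimate the slowly
    varying signal variance in a local window (ML rule, clipped at
    zero), then apply the MMSE gain s^2 / (s^2 + sigma_n^2)."""
    pad = win // 2
    yp = np.pad(y, pad, mode='reflect')
    # Local second moment over a sliding window.
    s2 = np.array([
        [np.mean(yp[i:i + win, j:j + win] ** 2)
         for j in range(y.shape[1])]
        for i in range(y.shape[0])
    ])
    # ML estimate of the signal variance field.
    sig2 = np.maximum(s2 - sigma_n ** 2, 0.0)
    # Per-location Wiener (MMSE) shrinkage.
    return y * sig2 / (sig2 + sigma_n ** 2)

# Sparse "coefficient field": one active patch in a zero background.
rng = np.random.default_rng(1)
x = np.zeros((32, 32))
x[8:24, 8:24] = 4.0
noisy = x + rng.standard_normal(x.shape)
den = local_mmse_denoise(noisy, sigma_n=1.0)
```

Where the local variance estimate is near zero the gain kills the noise; where it is large the coefficient passes nearly unchanged, which is the behavior the model is designed to capture.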
Authors:
Rafael Molina,
Aggelos K Katsaggelos,
Javier Abad,
Page (NA) Paper number 2426
Abstract:
In this paper, the subband decomposition of a single-channel image restoration
problem is examined. The decomposition is carried out in the image
model (prior model) in order to take into account the frequency activity
of each band of the original image. The hyperparameters associated
with each band together with the original image are rigorously estimated
within the Bayesian framework. Finally, the proposed method is tested
and compared with other methods on real images.
Authors:
Isao Yamada,
Masanori Kato,
Kohichi Sakaniwa,
Page (NA) Paper number 5004
Abstract:
In this paper, we propose a simple set-theoretic blind deconvolution
scheme based on a recently developed convex projection technique called
Hybrid Steepest Descent Methods. The scheme is essentially motivated
by Kundur and Hatzinakos' idea of minimizing a certain cost function
that uniformly reflects all a priori information, such as (i) the
nonnegativity of the true image and (ii) the support size of the
original object. The most remarkable feature of the proposed scheme
is that it can utilize each piece of a priori information separately
from the others: some partial information is treated in a set-theoretic
sense while the rest is incorporated in a cost function to be minimized.