Authors:
Viresh Ratnakar,
Page (NA) Paper number 1007
Abstract:
We describe a color image compression and decompression scheme suitable
for high resolution printers. The proposed scheme requires only two
image rows in memory at any time, and hence is suitable for low-cost,
high-resolution printing systems. The compression ratio can be specified
and is achieved exactly. Compound document images consisting of continuous-tone,
natural regions mixed with synthetic graphics or text are handled with
uniformly high quality. While the target compression ratios are moderate,
the quality requirements are extremely high: the compressed and decompressed
printed image needs to be virtually indistinguishable from the original
printed image. The scheme combines a lossless block coding technique
with a wavelet block codec. The wavelet block codec uses a new and
simple entropy coding technique that is more suitable for the specific
block-structure, compression target, and discrete wavelet transform
used.
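The two-row constraint can be pictured with a minimal sketch, assuming a Haar-style split applied strip by strip (the paper's actual filter bank, lossless block coder, and rate control are not reproduced here; all names are illustrative):

import numpy as np

def encode_two_rows(row_a, row_b):
    # Toy line-based transform: average/difference vertically across the two
    # rows, then horizontally within each result, giving four subband rows
    # that a block codec could quantize and code. Assumes even row length.
    a = row_a.astype(np.float64)
    b = row_b.astype(np.float64)
    lo_v, hi_v = (a + b) / 2.0, (a - b) / 2.0
    def h_split(r):
        return (r[0::2] + r[1::2]) / 2.0, (r[0::2] - r[1::2]) / 2.0
    ll, lh = h_split(lo_v)
    hl, hh = h_split(hi_v)
    return ll, lh, hl, hh

def stream_image(image):
    # Only two image rows are resident in memory at any time.
    for y in range(0, image.shape[0] - 1, 2):
        yield encode_two_rows(image[y], image[y + 1])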
Authors:
Xiaohui Xue, Department of Computer Science, Harbin Institute of Technology, P. R. China, 150001 (China)
Wen Gao, Department of Computer Science, Harbin Institute of Technology, P. R. China, 150001 (China)
Page (NA) Paper number 1112
Abstract:
This paper is devoted to prediction-based lossless/near-lossless image
compression. Within this framework there are three modules: a prediction
model, a statistical model, and entropy coding. The paper focuses on the
first two and puts forward a new method for each: a prediction model based
on backward-adaptive recognition of local texture orientation (BAROLTO),
and a Poisson statistical model. To the best of our knowledge, BAROLTO is
the most efficient predictor available. The Poisson model is designed to
limit context dilution while still making use of a large neighborhood, so
that more local correlation can be captured. Experiments show that our
compression system (BP), based on BAROLTO prediction and the Poisson model,
significantly outperforms the products of IBM and HP.
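The prediction stage can be sketched with the standard median edge detector (MED) serving as a generic stand-in for BAROLTO, whose orientation analysis is not reproduced here:

import numpy as np

def med_predict_residuals(img):
    # Median edge detector (MED) prediction as used in LOCO-I/JPEG-LS.
    # Returns the residual image that the statistical model and entropy
    # coder (the remaining two modules) would then compress.
    x = img.astype(np.int32)
    h, w = x.shape
    res = np.zeros((h, w), dtype=np.int32)
    for i in range(h):
        for j in range(w):
            a = x[i, j - 1] if j > 0 else 0                 # west
            b = x[i - 1, j] if i > 0 else 0                 # north
            c = x[i - 1, j - 1] if i > 0 and j > 0 else 0   # north-west
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[i, j] = x[i, j] - pred
    return res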
Authors:
Armando J Pinho, DET / INESC, University of Aveiro, Portugal (Portugal)
Page (NA) Paper number 1434
Abstract:
Recently, a new technique for the lossless encoding of boundary maps
was introduced, which is based on the concept of "transition points".
In this paper we show that, using a simple representation for the transition
points, it is possible to use the JBIG image coding standard for the
encoding of image partitions. Moreover, this new approach outperforms,
in most cases, differential chain-coding both in efficiency and simplicity
of implementation.
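One simple transition-point representation can be sketched as follows, assuming a transition point is marked wherever a region label differs from the label above or to the left (the paper's exact representation may differ); the resulting bi-level map is what a JBIG coder would then encode:

import numpy as np

def transition_point_map(labels):
    # Mark pixels whose region label differs from the pixel above or to
    # the left, producing a bi-level image suitable for a JBIG encoder.
    labels = np.asarray(labels)
    tp = np.zeros(labels.shape, dtype=np.uint8)
    tp[1:, :] |= (labels[1:, :] != labels[:-1, :]).astype(np.uint8)
    tp[:, 1:] |= (labels[:, 1:] != labels[:, :-1]).astype(np.uint8)
    return tp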
Authors:
Debargha Mukherjee,
Sanjit K Mitra,
Page (NA) Paper number 1482
Abstract:
A vector enhancement of Said and Pearlman's Set Partitioning in Hierarchical
Trees (SPIHT) methodology, named VSPIHT, has recently been proposed
for embedded wavelet image compression. While the VSPIHT algorithm
works better than scalar SPIHT for most images, a common vector dimension
to use for coding an entire image may not be optimal. Since statistics
vary widely within an image, a greater efficiency can be achieved if
different vector dimensions are used for coding the wavelet coefficients
from different portions of the image. We present a generalized methodology
for developing a variable dimensional set partitioning coder, where
different parts of an image may be coded in different vectoring modes,
with different scale factors, and up to different numbers of passes.
A Lagrangian rate-distortion criterion is used to make the optimum
coding choices. Coding passes are made jointly for the vectoring modes
to produce an embedded bitstream.
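The Lagrangian selection step amounts to picking, per image region, the vectoring mode with the smallest cost D + lambda*R; a minimal sketch, with illustrative names and numbers that are not from the paper:

def best_mode(candidates, lam):
    # candidates maps a mode label to a (rate_bits, distortion) pair;
    # return the mode minimizing the Lagrangian cost D + lam * R.
    return min(candidates,
               key=lambda m: candidates[m][1] + lam * candidates[m][0])

# Hypothetical (rate, distortion) pairs for one region.
modes = {"scalar": (1200, 35.0), "vec2": (1100, 33.5), "vec4": (1150, 31.0)}
print(best_mode(modes, lam=0.01))   # -> "vec4" for this example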
Authors:
Madoka Hasegawa,
Shigeo Kato,
Yoshifumi Yamada,
Page (NA) Paper number 1608
Abstract:
Multimedia communication systems using super high definition (SHD) images
are widely desired in various communities such as medical imaging, digital
museums, and libraries. There are, however, many problems in SHD image
communication systems because of the high pixel accuracy and high resolution.
We considered the functions indispensable to SHD image application systems;
they can be summarized in three items: reversibility, scalability, and
progressiveness. This paper proposes a partially decodable coding method
for realizing the scalability function. That is, when a whole image cannot
be displayed on the monitor, a reduced image is displayed first so that a
region of interest can be selected. The image data of the selected region
is then extracted from the code stream. For this purpose, a partially
decodable code is needed. We propose a new partially decodable coding
method based on Golomb-Rice codes.
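Golomb-Rice encoding of a non-negative integer is simple to sketch, assuming a fixed Rice parameter k (the partial-decodability structure built on top of it is not shown):

def rice_encode(n, k):
    # Unary-coded quotient (q ones and a terminating zero) followed by the
    # k low-order remainder bits.
    q = n >> k
    bits = "1" * q + "0"
    if k > 0:
        bits += format(n & ((1 << k) - 1), "0{}b".format(k))
    return bits

# Example: rice_encode(9, 2) -> quotient 2, remainder 1 -> "110" + "01"
print(rice_encode(9, 2))   # "11001"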
Authors:
Tage Røsten,
Viktor A Marthinussen,
Tor A Ramstad,
Andrew Perkis,
Page (NA) Paper number 1642
Abstract:
A multi-dimensional variable-length subband coder for pre-stack seismic
data was presented. 2-D and 3-D separable near-perfect-reconstruction
filter banks were optimized to maximize the coding gain, assuming that
the correlation properties of pre-stack seismic data can be modeled
by direction-dependent autoregressive processes. Identical quantization
and entropy coder allocation strategies were utilized to isolate the
compression efficiency of the different high-dimensional filter bank
methods. An example, with compression ratios ranging from 160:1 to
320:1, showed that 3-D subband coding of common shot gathers performed
50% better in terms of bit rate at a given signal-to-noise ratio compared
to 2-D subband coding of common shot gathers.
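The coding-gain criterion typically maximized in such designs is the ratio of the arithmetic to the geometric mean of the subband variances; a minimal sketch, assuming an orthonormal split into equal-size subbands (the AR-model-based filter optimization itself is not reproduced):

import numpy as np

def subband_coding_gain_db(subband_variances):
    # Coding gain of a uniform M-band split, in dB: arithmetic mean of the
    # subband variances divided by their geometric mean.
    v = np.asarray(subband_variances, dtype=np.float64)
    gain = v.mean() / np.exp(np.mean(np.log(v)))
    return 10.0 * np.log10(gain)

# Example: variance concentrated in one subband gives a large gain.
print(subband_coding_gain_db([8.0, 0.5, 0.3, 0.2]))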
Authors:
Qin Jiang,
Joon J Lee, Department of Computer Engineering, Dongseo University, Pusan 617-716, South Korea. (Korea)
Monson H Hayes III,
Page (NA) Paper number 1840
Abstract:
Stereo image pair coding is an important issue in stereo data compression.
A wavelet-based stereo image pair coding algorithm is proposed in this
paper. The wavelet transform is used to decompose the image into an
approximation image and three edge images. In the wavelet domain, a
disparity estimation technique is developed to estimate the disparity
field using both the approximation image and the edge images. To improve
the accuracy of the wavelet images produced by disparity compensation,
a novel wavelet-based subspace projection technique (SPT) is developed.
In the SPT, block-dependent subspaces are constructed using block-varying
basis vectors derived from the disparity-compensated wavelet images.
Experimental results show that the proposed algorithm achieves efficient
stereo image compression.
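Disparity estimation in the approximation band can be sketched as horizontal block matching under a sum-of-absolute-differences criterion; the paper additionally exploits the edge images and refines the result with the subspace projection technique, neither of which is shown here:

import numpy as np

def disparity_estimate(left_ll, right_ll, block=8, max_disp=16):
    # For each block of the right approximation image, find the horizontal
    # shift into the left approximation image that minimizes the SAD.
    left_ll = np.asarray(left_ll, dtype=np.float64)
    right_ll = np.asarray(right_ll, dtype=np.float64)
    h, w = right_ll.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = right_ll[by:by + block, bx:bx + block]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, bx) + 1):
                cand = left_ll[by:by + block, bx - d:bx - d + block]
                sad = np.abs(target - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by // block, bx // block] = best_d
    return disp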
Authors:
Nikola Rozic, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)
Dinko Begusić, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)
Jurica Ursic, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)
Page (NA) Paper number 2162
Abstract:
BCH codes in the frequency domain provide robust channel coding for
image channel coding applications. The underlying problem of estimation
of real/complex sinusoids in white additive noise may be formulated
and solved in different ways. The standard approach is based on the
least squares method and Berlekamp-Massey algorithm (BMA). In this
paper we compare the performance of the BMA with other LS based algorithms
including: minimum norm solution based algorithm (MNS), forward-backward
linear prediction based algorithm (FBLP) and singular-value decomposition
based minimum norm algorithm (SVD-MNA). Results of computer experiments
show that introducing the minimum norm solution, forward-backward
prediction, and the SVD may significantly improve the performance of
the decoder at relatively low SNR. In selecting among the proposed
algorithms, a performance/complexity tradeoff has to be considered.
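The forward-backward linear prediction step with an SVD-truncated minimum-norm solution can be sketched as follows, assuming the decoding problem has already been cast as estimating exponentials from noisy syndrome samples (names and the rank choice are illustrative):

import numpy as np

def fblp_minimum_norm(x, order, rank):
    # Stack forward and conjugated backward prediction equations, then solve
    # for the prediction coefficients with a rank-truncated SVD.
    x = np.asarray(x, dtype=np.complex128)
    N, L = len(x), order
    rows, rhs = [], []
    for n in range(L, N):          # forward: x[n] from x[n-1..n-L]
        rows.append(x[n - L:n][::-1])
        rhs.append(x[n])
    for n in range(N - L):         # backward: conj(x[n]) from conj(x[n+1..n+L])
        rows.append(np.conj(x[n + 1:n + L + 1]))
        rhs.append(np.conj(x[n]))
    A, b = np.array(rows), np.array(rhs)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / si if i < rank else 0.0 for i, si in enumerate(s)])
    return Vh.conj().T @ (s_inv * (U.conj().T @ b))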
Authors:
Veeraraghavan N Ramaswamy, Bell Laboratories, Lucent Technologies, Holmdel, NJ, USA (USA)
Kamesh R Namuduri, Center for Theoretical Studies, Clark Atlanta University, Atlanta, GA, USA (USA)
Nagarajan Ranganathan, Dept. of ECE, Univ. of Texas at El Paso, TX, USA (USA)
Page (NA) Paper number 2369
Abstract:
The EZW lossless coding framework consists of three stages: (i) a reversible
wavelet transform, (ii) an EZW data structure to order the coefficients
and (iii) arithmetic coding using context modeling. In this work,
we discuss the various experiments conducted on context modeling of
wavelet coefficients for arithmetic coding to optimize the compression
efficiency. The context modeling of wavelet coefficients can be classified
into two parts: (i) context modeling of significance information and
(ii) context modeling of the remaining or residue information. It was
observed from our experiments that, while context modeling of the residue
helped achieve considerable compression efficiency, context modeling of
the significance information helped only to a modest extent. Keywords:
lossless, image coding, EZW, wavelet, context modeling.
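Context formation for the significance pass can be sketched by indexing an adaptive model with the significance flags of a coefficient's neighbours (the paper's actual context definitions may differ):

import numpy as np

def significance_context(sig_map, i, j):
    # Build a context index in [0, 15] from the significance flags of the
    # left, right, top and bottom neighbours of coefficient (i, j); the index
    # selects one of 16 adaptive probability models for the arithmetic coder.
    h, w = sig_map.shape
    neighbours = [(i, j - 1), (i, j + 1), (i - 1, j), (i + 1, j)]
    ctx = 0
    for bit, (y, x) in enumerate(neighbours):
        if 0 <= y < h and 0 <= x < w and sig_map[y, x]:
            ctx |= 1 << bit
    return ctx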
Authors:
Francesco Rizzo,
James A Storer,
Bruno Carpentieri,
Page (NA) Paper number 1955
Abstract:
Constantinescu and Storer in 1993 introduced a single-pass vector
quantization algorithm that, with no specific training or prior knowledge
of the data, was able to achieve better compression results than the JPEG
standard, along with a number of computational advantages such as an
adjustable fidelity/compression tradeoff, precise guarantees on any l×l
sub-block of the image, and fast table-lookup decoding. In this paper we
improve that basic algorithm by blending it with the mean shape-gain vector
quantization (MSGVQ) compression scheme. This blending yields slightly
better compression performance and a clear improvement in visual quality.
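The mean/shape-gain decomposition that MSGVQ applies to each block before quantizing the three components separately can be sketched as follows (the codebook search itself is omitted):

import numpy as np

def msg_decompose(block):
    # Split a block into mean, gain and unit-norm shape: x = m + g * s.
    x = np.asarray(block, dtype=np.float64).ravel()
    m = x.mean()
    r = x - m
    g = np.linalg.norm(r)
    s = r / g if g > 0 else np.zeros_like(r)
    return m, g, s

def msg_reconstruct(m, g, s, shape):
    # Rebuild the block from its (quantized) mean, gain and shape vector.
    return (m + g * s).reshape(shape)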