Abstract: Session IMDSP-2
IMDSP-2.1
TROBIC: Two-Row Buffer Image Compression
Viresh Ratnakar (Epson Palo Alto Laboratory)
We describe a color image compression and decompression scheme suitable for high resolution printers. The proposed scheme requires only two image rows in memory at any time, and hence is suitable for low-cost, high-resolution printing systems. The compression ratio can be specified and is achieved exactly. Compound document images consisting of continuous-tone, natural regions mixed with synthetic graphics or text are handled with uniformly high quality. While the target compression ratios are moderate, the quality requirements are extremely high: the compressed and decompressed printed image needs to be virtually indistinguishable from the original printed image. The scheme combines a lossless block coding technique with a wavelet block codec. The wavelet block codec uses a new and simple entropy coding technique that is more suitable for the specific block-structure, compression target, and discrete wavelet transform used.
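As a rough illustration of the buffering constraint (not the TROBIC codec itself; the block classifier and both coders below are invented placeholders), the following Python sketch keeps only two image rows resident at any time and routes each block to a lossless or a lossy path:

import numpy as np

def encode_block_lossless(block):
    return b"L" + block.tobytes()                      # pixels stored verbatim

def encode_block_lossy(block, q=8):
    return b"Q" + (block // q).astype(np.uint8).tobytes()  # coarse quantizer

def compress_two_row(image, block_w=8, var_thresh=100.0):
    h, w = image.shape
    out = []
    for r in range(0, h, 2):
        rows = image[r:r + 2]                  # the only image rows in memory
        for c in range(0, w, block_w):
            block = rows[:, c:c + block_w]
            if block.var() < var_thresh:       # flat/synthetic: code losslessly
                out.append(encode_block_lossless(block))
            else:                              # busy/natural: lossy path
                out.append(encode_block_lossy(block))
    return out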
IMDSP-2.2
Prediction Based on Backward Adaptive Recognition of Local Texture Orientation and Poisson Statistical Model for Lossless/Near-Lossless Image Compression
Xiaohui Xue,
Wen Gao (Department of Computer Science, Harbin Institute of Technology, P. R. China, 150001)
This paper is devoted to a prediction-based
lossless/near-lossless image compression algorithm.
Within this framework there are three modules:
a prediction model, a statistical model, and entropy coding.
This paper focuses on the first two and puts forward
two new methods: a prediction model based on backward
adaptive recognition of local texture orientation (BAROLTO),
and a Poisson statistical model. As far as we know,
BAROLTO is the most efficient predictor available. The
Poisson model is designed to avoid context dilution to
some extent while making use of a large neighborhood,
so that more local correlation can be captured.
Experiments show that our compression system (BP),
based on BAROLTO prediction and the Poisson model,
significantly outperforms the products of IBM and HP.
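The abstract does not spell out the Poisson model; the sketch below only illustrates the general idea under stated assumptions (the window size and the rate estimate are guesses, not the authors' formulation): the Poisson rate is estimated from the mean absolute residual in a large causal window, and the resulting probability would be handed to an arithmetic coder.

import math
import numpy as np

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def residual_probability(residuals, r, c, window=3):
    # causal neighborhood above/left of (r, c); a large window fights
    # context dilution by pooling many samples into a single parameter
    top = residuals[max(0, r - window):r, max(0, c - window):c + window + 1]
    left = residuals[r, max(0, c - window):c]
    hist = np.concatenate([top.ravel(), left.ravel()])
    lam = max(np.abs(hist).mean(), 1e-3) if hist.size else 1.0
    return poisson_pmf(abs(int(residuals[r, c])), lam)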
IMDSP-2.3
Encoding of image partitions using a standard technique for lossless image compression
Armando J Pinho (DET / INESC, University of Aveiro, Portugal)
Recently, a new technique for the lossless encoding of boundary maps was
introduced, based on the concept of "transition points". In this
paper we show that, using a simple representation for the transition points,
it is possible to use the JBIG image coding standard for the encoding of
image partitions. Moreover, in most cases this new approach outperforms
differential chain-coding in both coding efficiency and simplicity of
implementation.
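A minimal sketch of the transition-point idea (the paper's exact definition may differ): label changes along each row of the partition are marked in a bi-level map, which is precisely the kind of image a JBIG coder compresses well, and the row can be rebuilt from the map plus the per-run labels.

import numpy as np

def transition_map(labels):
    t = np.zeros(labels.shape, dtype=np.uint8)
    t[:, 1:] = (labels[:, 1:] != labels[:, :-1]).astype(np.uint8)
    t[:, 0] = 1                          # row starts are always transitions
    return t

def reconstruct_row(trans_row, run_labels):
    # given one bitmap row and the label of each run, rebuild the labels;
    # assumes trans_row[0] == 1, as produced by transition_map above
    out, runs = [], iter(run_labels)
    for is_trans in trans_row:
        if is_trans:
            cur = next(runs)
        out.append(cur)
    return out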
IMDSP-2.4
Generalized variable dimensional set partitioning for embedded wavelet image compression
Debargha Mukherjee,
Sanjit K Mitra (Department of ECE, University of California, Santa Barbara)
A vector enhancement of Said and Pearlman's Set
Partitioning in Hierarchical Trees (SPIHT) methodology,
named VSPIHT, has recently been proposed for embedded
wavelet image compression. While the VSPIHT algorithm
works better than scalar SPIHT for most images, a
common vector dimension to use for coding an entire
image may not be optimal. Since statistics vary widely
within an image, greater efficiency can be achieved
if different vector dimensions are used for coding the
wavelet coefficients from different portions of the
image. We present a generalized methodology for
developing a variable dimensional set partitioning
coder, where different parts of an image may be coded
in different vectoring modes, with different scale
factors, and up to different numbers of passes. A
Lagrangian rate-distortion criterion is used to make
the optimum coding choices. Coding passes are made
jointly for the vectoring modes to produce an embedded
bitstream.
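The Lagrangian choice can be made concrete with a small sketch (the mode names and all (distortion, rate) numbers below are purely illustrative): each part of the image picks the vectoring mode that minimizes D + lambda * R.

def best_mode(candidates, lam):
    # candidates: list of (mode, distortion, rate) triples for one region
    return min(candidates, key=lambda m: m[1] + lam * m[2])[0]

parts = {
    "smooth":   [("scalar", 40.0, 1000), ("vec2", 35.0, 1200), ("vec4", 33.0, 1500)],
    "textured": [("scalar", 90.0, 2000), ("vec2", 70.0, 2100), ("vec4", 60.0, 2600)],
}
choice = {name: best_mode(c, lam=0.01) for name, c in parts.items()}
print(choice)   # {'smooth': 'vec2', 'textured': 'vec4'}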
IMDSP-2.5
A Partially Decodable Code for Scalable Compression of Super High Definition Images
Madoka Hasegawa,
Shigeo Kato,
Yoshifumi Yamada (Faculty of Engineering, Utsunomiya University)
Multimedia communication systems using super high definition (SHD)
images are widely desired in communities such as medical imaging and
digital museums and libraries. SHD image communication systems,
however, pose many problems because of their high pixel accuracy and
high resolution. We consider three functions indispensable in SHD
image application systems: reversibility, scalability, and
progressiveness. This paper proposes a partially decodable coding
method for realizing the scalability function. That is, when the whole
image cannot be displayed on the monitor, a reduced image is displayed
first so that a region of interest can be selected; the image data of
the selected region is then extracted from the code stream. For this
purpose, a partially decodable code is needed.
We propose a new partially decodable coding method based on the Golomb-Rice code.
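For reference, here is a plain textbook Golomb-Rice coder with divisor 2^k (the paper's partially decodable variant builds on this family but is not reproduced here): each value splits into a unary quotient and a k-bit remainder, so codeword boundaries can be located without side tables.

def rice_encode(values, k):
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                               # unary quotient
        bits += [(r >> i) & 1 for i in reversed(range(k))]  # k-bit remainder
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                  # skip the 0 terminator
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

assert rice_decode(rice_encode([0, 5, 13], k=2), k=2, count=3) == [0, 5, 13]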
IMDSP-2.6
Filter Bank Optimization for High-Dimensional Compression of Pre-Stack Seismic Data
Tage Røsten,
Viktor A Marthinussen,
Tor A Ramstad,
Andrew Perkis (Dept. of Telecomm., Norwegian Univ. of Sci. and Tech.)
A multi-dimensional variable-length subband coder for pre-stack seismic
data is presented. A 2-D and 3-D separable near perfect reconstruction
filter bank is optimized to maximize the coding gain, assuming that the
correlation properties of pre-stack seismic data can be modeled by
directionally dependent autoregressive processes. Identical quantization
and entropy coder allocation strategies are utilized to isolate the
compression efficiency of the different high-dimensional filter bank
methods. An example, with compression ratios ranging from 160:1 to 320:1,
shows that 3-D subband coding of common shot gathers performs 50% better
in terms of bit rate at a given signal-to-noise ratio than 2-D subband
coding of common shot gathers.
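As background, the coding gain being maximized is conventionally the ratio of the arithmetic to the geometric mean of the subband variances; the snippet below computes this textbook figure of merit for uniform bands, not the paper's exact objective (which also handles unequal band sizes and near-perfect reconstruction).

import numpy as np

def subband_coding_gain(variances):
    v = np.asarray(variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())   # arithmetic / geometric mean

print(subband_coding_gain([4.0, 0.25]))  # 2.125: energy compaction pays off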
IMDSP-2.7
A Wavelet Based Stereo Image Coding Algorithm
Qin Jiang (Center for Signal and Image Processing, School of Electrical & Computer Engineering, Georgia Institute of Technology.),
Joon J Lee (Department of Computer Engineering, Dongseo University, Pusan 617-716, South Korea.),
Monson H Hayes, III (Center for Signal and Image Processing, School of Electrical & Computer Engineering, Georgia Institute of Technology.)
Stereo image pair coding is an important issue in stereo data
compression. A wavelet based stereo image pair coding algorithm is
proposed in this paper. The wavelet transform is used to decompose the
image into an approximation image and three edge images. In the wavelet
domain, a disparity estimation technique is developed to estimate the
disparity field using both the approximation image and the edge images.
To improve the accuracy of the wavelet images produced by the disparity
compensation technique, a novel wavelet based Subspace Projection
Technique (SPT) is developed. In the SPT, block-dependent subspaces are
constructed using block-varying basis vectors derived from the disparity
compensated wavelet images. Experimental results show that the proposed
algorithm achieves efficient stereo image compression.
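For orientation only, a generic block-matching disparity search (the paper's estimator additionally uses the edge images and the subspace projection step): for each block of the left image, find the horizontal shift in the right image minimizing the sum of absolute differences.

import numpy as np

def disparity_blocks(left, right, bs=8, max_d=16):
    h, w = left.shape
    disp = np.zeros((h // bs, w // bs), dtype=int)
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            ref = left[i:i + bs, j:j + bs].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, min(max_d, j) + 1):    # shift toward the left
                cand = right[i:i + bs, j - d:j - d + bs].astype(float)
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[i // bs, j // bs] = best_d
    return disp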
IMDSP-2.8
Least Squares Based Decoding for BCH Codes in Image Applications
Nikola Rozic,
Dinko Begusic,
Jurica Ursic (Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia)
BCH codes in the frequency domain provide robust channel coding for image applications. The underlying problem of estimating real/complex sinusoids in additive white noise may be formulated and solved in different ways. The standard approach is based on the least squares method and the Berlekamp-Massey algorithm (BMA). In this paper we compare the performance of the BMA with other LS based algorithms, including the minimum norm solution based algorithm (MNS), the forward-backward linear prediction based algorithm (FBLP), and the singular-value decomposition based minimum norm algorithm (SVD-MNA). Results of computer experiments show that introducing the minimum norm solution, forward-backward prediction, and the SVD decomposition may significantly improve the performance of the decoder at relatively low SNR. In selecting among the proposed algorithms, a performance/complexity tradeoff has to be considered.
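As an illustration of one of the compared techniques, a textbook forward-backward linear prediction (FBLP) frequency estimator (generic version, not the paper's full BCH decoder): forward and conjugated backward prediction equations are stacked, solved by least squares, and the frequencies read off the roots of the prediction polynomial.

import numpy as np

def fblp_poly(x, order):
    rows, rhs = [], []
    for t in range(order, len(x)):            # forward equations
        rows.append(x[t - 1::-1][:order]); rhs.append(x[t])
    for t in range(len(x) - order):           # backward (conjugated) equations
        rows.append(np.conj(x[t + 1:t + 1 + order])); rhs.append(np.conj(x[t]))
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.roots(np.concatenate(([1.0], -a)))

t = np.arange(64)
x = np.exp(2j * np.pi * 0.2 * t) + 0.05 * np.random.randn(64)
roots = fblp_poly(x, order=4)
sig = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]  # root nearest unit circle
print(np.angle(sig) / (2 * np.pi))                   # close to 0.2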
IMDSP-2.9
Context Modeling of Wavelet Coefficients in EZW-Based Lossless Image Coding
Veeraraghavan N Ramaswamy (Bell Laboratories, Lucent Technologies, Holmdel, NJ, USA),
Kamesh R Namuduri (Center for Theoretical Studies, Clark Atlanta University, Atlanta, GA, USA),
Ranganathan Nagarajan (Dept. of ECE, Univ. of Texas at El Paso, TX, USA)
The EZW lossless coding framework consists of three stages:
(i) a reversible wavelet transform, (ii) an EZW data structure to order the coefficients
and (iii) arithmetic coding using context modeling.
In this work, we discuss the various experiments
conducted on context modeling of wavelet coefficients
for arithmetic coding to optimize the compression efficiency.
The context modeling of wavelet coefficients can be classified into two parts:
(i) context modeling of significance information and (ii) context modeling of
the remaining or residue information. It was observed in our experiments
that, while context modeling of the residue information helped achieve
considerable compression efficiency, context modeling of the significance
information helped only to a modest extent.
Keywords: lossless, image coding, EZW, wavelet, context modeling.
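One plausible way to form significance-coding contexts (an assumption for illustration, not the paper's actual context set): pack the significance bits of causal neighbors and the parent into a context index, and keep per-context adaptive counts whose probabilities feed the arithmetic coder.

import numpy as np

def significance_context(sig, r, c, parent_sig):
    left  = sig[r, c - 1] if c > 0 else 0
    above = sig[r - 1, c] if r > 0 else 0
    return (left << 2) | (above << 1) | parent_sig    # 8 contexts

class ContextCounts:
    def __init__(self, n_ctx=8):
        self.c = np.ones((n_ctx, 2), dtype=np.int64)  # Laplace-smoothed counts
    def p(self, ctx, bit):
        return self.c[ctx, bit] / self.c[ctx].sum()   # probability for the coder
    def update(self, ctx, bit):
        self.c[ctx, bit] += 1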
IMDSP-2.10
Improving Single-Pass Adaptive VQ
Francesco Rizzo,
James A Storer (Brandeis University, COSI Dept.),
Bruno Carpentieri (Università di Salerno, Dipartimento di Informatica e Applicazioni)
In 1993, Constantinescu and Storer introduced a single-pass
vector quantization algorithm that, with no specific training or
prior knowledge of the data, was able to achieve better compression
results than the JPEG standard, along with a number of
computational advantages such as an adjustable fidelity/compression
tradeoff, precise guarantees on any l x l sub-block of the
image, and fast table-lookup decoding. In this paper we improve that basic
algorithm by blending it with the mean shape-gain vector quantization
(MSGVQ) compression scheme. This blending yields slightly better
compression performance and a clear improvement in visual
quality.
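A minimal sketch of the mean shape-gain decomposition at the heart of MSGVQ (brute-force codebook search on a toy random codebook; the actual single-pass coder also grows its codebook on-line): each block is coded as a mean, a gain, and the index of the nearest unit-norm shape vector.

import numpy as np

def msg_encode(block, shapes):
    v = block.astype(float).ravel()
    mean = v.mean()
    resid = v - mean
    gain = np.linalg.norm(resid)
    shape = resid / gain if gain > 0 else np.zeros_like(resid)
    idx = int(np.argmax(shapes @ shape))       # best unit-norm shape match
    return mean, gain, idx

def msg_decode(mean, gain, idx, shapes, block_shape):
    return (mean + gain * shapes[idx]).reshape(block_shape)

# toy codebook of 16 random unit-norm shapes for 4x4 blocks
rng = np.random.default_rng(0)
shapes = rng.standard_normal((16, 16))
shapes /= np.linalg.norm(shapes, axis=1, keepdims=True)
block = rng.integers(0, 256, (4, 4))
print(msg_decode(*msg_encode(block, shapes), shapes, (4, 4)).shape)  # (4, 4)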