Session: IMDSP-L4
Time: 3:30 - 5:30, Wednesday, May 9, 2001
Location: Room 251 D
Title: Image Coding 2
Chair: Robert Gray

3:30, IMDSP-L4.1
GENERALIZED S TRANSFORM
M. ADAMS, F. KOSSENTINI
The generalized S transform (GST), a family of reversible integer-to-integer transforms inspired by the S transform, is proposed. This family of transforms is then studied in some detail. For example, the relationship between the GST and the lifting scheme is discussed, and the effects of choosing different GST parameters are examined. Some examples of specific transforms in the GST family are also given.
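For context, the classic S transform that the GST generalizes maps an integer pair to a floored average and a difference, and is exactly invertible in integer arithmetic. A minimal sketch of that base transform (not the authors' generalized formulation) follows:

```python
def s_transform(x0, x1):
    """Forward S transform: reversible integer-to-integer pair transform."""
    h = x1 - x0          # high-pass: difference
    l = x0 + (h >> 1)    # low-pass: floor of the average (x0 + x1) / 2
    return l, h

def inverse_s_transform(l, h):
    """Exact inverse: recovers the original integer pair."""
    x0 = l - (h >> 1)
    x1 = x0 + h
    return x0, x1
```

The arithmetic shift `h >> 1` floors toward negative infinity, which is what makes the rounding step exactly reversible for negative differences as well.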

3:50, IMDSP-L4.2
FAST AND MEMORY EFFICIENT JBIG2 ENCODER
Y. YE, P. COSMAN
In this paper we propose a fast and memory-efficient encoding strategy for text image compression with the JBIG2 standard. The encoder splits the input image into horizontal stripes and encodes one stripe at a time. The current dictionary is constructed by updating the dictionaries from previous stripes. We describe separate updating processes for the singleton exclusion dictionary and for the modified-class dictionary. Experiments show that, for both dictionaries, splitting the page into two stripes can save 30% of encoding time and 40% of physical memory with a small loss of about 1.5% in compression. Further gains can be obtained by using more stripes, but the returns diminish beyond six stripes. The same updating processes are also applied to compressing multi-page document images and are shown to improve compression by 8-10% over coding a multi-page document as a collection of single-page documents.
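The stripe-wise flow with a dictionary carried across stripes can be illustrated with a toy sketch. This is a hypothetical outline, not JBIG2 symbol-dictionary coding itself: here each distinct row pattern plays the role of a "symbol", and the dictionary built from earlier stripes is reused and extended in later ones.

```python
def encode_striped(rows, n_stripes):
    """Toy stripe-wise dictionary coder: the symbol dictionary carries
    over between stripes, so patterns repeated across stripes are coded
    as short index references instead of new symbols."""
    stripe_h = (len(rows) + n_stripes - 1) // n_stripes
    dictionary = {}                    # pattern -> index, shared across stripes
    coded = []
    for s in range(n_stripes):
        for row in rows[s * stripe_h : (s + 1) * stripe_h]:
            key = tuple(row)
            if key in dictionary:
                coded.append(("ref", dictionary[key]))   # reuse earlier symbol
            else:
                dictionary[key] = len(dictionary)
                coded.append(("new", key))               # emit a new symbol
    return coded, dictionary
```

The memory saving in the paper comes from never holding the whole page in memory at once; the toy above only shows the dictionary-updating side of that idea.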

4:10, IMDSP-L4.3
EFFICIENT IMAGE REPRESENTATION BY ANISOTROPIC REFINEMENT IN MATCHING PURSUIT
P. VANDERGHEYNST, P. FROSSARD
This paper presents a new image representation method based on anisotropic refinement. It has been shown that wavelets are not optimal for coding 2-D objects, whose efficient approximation requires true 2-D dictionaries. We propose to use rotations and anisotropic scaling to build a truly bi-dimensional dictionary. Matching Pursuit then stands as a natural candidate to provide an image representation with an anisotropic refinement scheme: it decomposes the image into a series of basis functions weighted by their respective coefficients. Even though the basis functions can a priori take any form, bi-dimensional dictionaries are almost exclusively composed of two-dimensional Gabor functions. We present here a new dictionary design, introducing orientation and anisotropic refinement of a Gaussian generating function. The new dictionary efficiently codes 2-D objects, and oriented contours in particular, and is shown to clearly outperform common non-oriented Gabor dictionaries.
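Matching Pursuit itself is a generic greedy decomposition, independent of the dictionary design. A minimal sketch over an arbitrary dictionary of unit-norm atoms (the dictionary here is a placeholder, not the oriented Gaussian dictionary of the paper):

```python
import numpy as np

def matching_pursuit(signal, atoms, n_iter):
    """Greedy MP: at each step pick the unit-norm atom most correlated
    with the residual, record its coefficient, and subtract it."""
    residual = signal.astype(float)
    decomposition = []
    for _ in range(n_iter):
        correlations = atoms @ residual            # inner products <r, g_k>
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        decomposition.append((k, c))
        residual = residual - c * atoms[k]         # residual energy never increases
    return decomposition, residual
```

The signal is thus approximated as the sum of the selected atoms weighted by their coefficients, which matches the "basis functions weighted by their respective coefficients" description above.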

4:30, IMDSP-L4.4
REGION-BASED NEAR-LOSSLESS IMAGE COMPRESSION
A. PINHO
We present a near-lossless technique for image compression based on partitioning the image into regions of constant intensity. The boundary information associated with the image partition is encoded with the transition-points method. The region intensities are compressed by the usual entropy encoding of context-modeled prediction residuals. Experimental results show that this approach provides significant compression improvements on images with sparse histograms, for small $L_{\infty}$ errors.

4:50, IMDSP-L4.5
SEISMIC DATA COMPRESSION USING GULLOTS
L. DUVAL, T. NAGAI
Recent work has shown that GenLOT coding is a very effective technique for compressing seismic data. The role of a transform in a coder is to concentrate information and reduce statistical redundancy. When used with embedded zerotree coding, GenLOTs often outperform traditional block-oriented algorithms and wavelets. In this work we investigate the use of Generalized Unequal Length Lapped Orthogonal Transforms (GULLOTs). Their shorter bases for high-frequency components are suitable for reducing ringing artifacts in images. While GULLOTs yield performance comparable to GenLOTs on smooth seismic signals such as stacked sections, they achieve improved performance on less smooth signals such as shot gathers.

5:10, IMDSP-L4.6
GAUSS MIXTURE VECTOR QUANTIZATION
R. GRAY
Gauss mixtures are a popular class of models in statistics and statistical signal processing because they can provide good fits to smooth densities, because they have a rich theory, and because they can be well estimated by existing algorithms such as the EM algorithm. We extend an information-theoretic extremal property for source coding from Gaussian sources to Gauss mixtures using high-rate quantization theory, and extend a method originally used for LPC speech vector quantization to provide a Lloyd clustering approach to the design of Gauss mixture models. The theory provides formulas relating the minimum discrimination information (MDI) for model selection to the mean squared error resulting when the MDI criterion is used in an optimized robust classified vector quantizer. It also motivates the use of Gauss mixture models in robust compression systems for general random vectors.