Chair: Michel Barlaud, CNRS (FRANCE)
Michael Lightstone, University of California (USA)
Kenneth Rose, University of California (USA)
Sanjit K. Mitra, University of California (USA)
The optimal design of quadtree-based vector quantizers is addressed. Until now, work in this area has focused on optimizing the quadtree structure for a given set of leaf quantizers, with little attention paid to the design of the quantizers themselves. In cases where the leaf quantizers were considered, codebooks were optimized without regard to the ultimate quadtree segmentation. However, it is not sufficient to consider each problem independently, as separate optimization leads to an overall suboptimal solution. Rather, the quadtree structure and the leaf codebooks must be designed jointly for overall optimality. The method we suggest is a quadtree-constrained version of the entropy-constrained vector quantization design method. To this end, a centroid condition for the leaf codebooks is derived that represents a necessary optimality condition for variable-rate quadtree coding. This condition, when iterated with the optimal quadtree segmentation strategy of Sullivan and Baker, results in a monotonically decreasing rate-distortion cost function and, consequently, an (at least locally) optimal quadtree solution.
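As a rough illustration of the alternation the abstract describes, the sketch below iterates an optimal segmentation pass (split/no-split decided by a Lagrangian cost J = D + lambda*R, in the spirit of Sullivan and Baker) with a centroid update of the leaf codebooks. The block sizes, codebook sizes, fixed-rate index costs, and random training data are illustrative assumptions only; the paper itself uses entropy-constrained rates and deeper quadtrees.

import numpy as np

rng = np.random.default_rng(0)
blocks8 = rng.normal(size=(200, 8, 8))        # hypothetical training blocks
lam = 0.1                                     # Lagrange multiplier (rate-distortion trade-off)
cb8 = rng.normal(size=(16, 64))               # leaf codebook for unsplit 8x8 blocks
cb4 = rng.normal(size=(16, 16))               # leaf codebook for 4x4 sub-blocks
R8, R4 = 1 + np.log2(16), 1 + 4 * np.log2(16) # bits: quadtree flag + index cost (fixed-rate stand-in)

def quantize(x, cb):
    """Nearest-codeword search; returns (index, squared error)."""
    d = ((cb - x) ** 2).sum(axis=1)
    i = int(np.argmin(d))
    return i, d[i]

for it in range(10):
    # Step 1: optimal segmentation for the current codebooks (Sullivan/Baker-style decision).
    assign8, assign4, cost = [], [], 0.0
    for b in blocks8:
        i8, d8 = quantize(b.reshape(-1), cb8)
        subs = [b[r:r + 4, c:c + 4].reshape(-1) for r in (0, 4) for c in (0, 4)]
        q4 = [quantize(s, cb4) for s in subs]
        d4 = sum(d for _, d in q4)
        if d8 + lam * R8 <= d4 + lam * R4:    # keep the 8x8 leaf
            assign8.append((i8, b.reshape(-1))); cost += d8 + lam * R8
        else:                                 # split into four 4x4 leaves
            assign4 += [(i, s) for (i, _), s in zip(q4, subs)]; cost += d4 + lam * R4
    # Step 2: centroid update of each leaf codebook from the vectors mapped to it.
    for cb, assign in ((cb8, assign8), (cb4, assign4)):
        for j in range(len(cb)):
            members = [x for i, x in assign if i == j]
            if members:
                cb[j] = np.mean(members, axis=0)
    print(it, cost)                           # the Lagrangian cost is non-increasing over iterations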
Wenshiung Chen, Feng Chia University (REPUBLIC OF CHINA)
En-Hui Yang, University of Southern California (USA)
Zhen Zhang, University of Southern California (USA)
In this paper, a variant of the address vector quantization (ADVQ) algorithm for image compression using conditional-entropy lossless coding is presented. The motivation for the proposed approach derives from Shannon's basic entropy result that conditional entropy is never greater than joint entropy.
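The coding gain invoked above rests on the standard inequalities H(X|Y) <= H(X) <= H(X,Y). The snippet below checks them numerically on a small, made-up joint distribution; the distribution is purely illustrative and is not taken from the paper.

import numpy as np

p_xy = np.array([[0.30, 0.10],         # hypothetical joint distribution of two
                 [0.05, 0.55]])        # correlated binary symbols X (rows) and Y (columns)

H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
H_joint = H(p_xy)                      # H(X,Y)
H_x = H(p_xy.sum(axis=1))              # H(X)
H_y = H(p_xy.sum(axis=0))              # H(Y)
H_x_given_y = H_joint - H_y            # chain rule: H(X|Y) = H(X,Y) - H(Y)

print(f"H(X|Y)={H_x_given_y:.3f} <= H(X)={H_x:.3f} <= H(X,Y)={H_joint:.3f}")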
Iole Moccagatta, Swiss Federal Institute of Technology at Lausanne (SWITZERLAND)
Murat Kunt, Swiss Federal Institute of Technology at Lausanne (SWITZERLAND)
In this paper we present a coding scheme for color images aimed at high compression ratios. It is based on perceptually classified vector quantization (VQ), where the different classes are chosen to improve the quality of the decoded image. Spatial correlation is reduced by a tree-structured wavelet decomposition, and the insignificant coefficients are then predicted across subbands. Afterwards, the reduced set of data is organized into vectors in such a way that residual intra- and inter-band correlations are exploited. Finally, these vectors are coded by a classified VQ. The proposed scheme produces reconstructed images of good quality at compression ratios higher than 100:1 and has been shown to outperform the JPEG standard.
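The sketch below shows a generic classified-VQ encoding step of the kind the scheme above relies on: each vector is first assigned to a perceptual class and then quantized with that class's own codebook. The classes, codebook sizes, and the energy-based classifier are illustrative assumptions, not the paper's actual design.

import numpy as np

rng = np.random.default_rng(1)
codebooks = {"low_activity": rng.normal(scale=0.2, size=(32, 16)),
             "high_activity": rng.normal(scale=1.0, size=(64, 16))}

def classify(v):
    # toy classifier: vector energy decides the perceptual class
    return "high_activity" if np.sum(v ** 2) > 4.0 else "low_activity"

def encode(v):
    cls = classify(v)
    cb = codebooks[cls]
    idx = int(np.argmin(((cb - v) ** 2).sum(axis=1)))
    return cls, idx                    # transmitted: class id + index within the class codebook

vec = rng.normal(size=16)
print(encode(vec))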
Christopher Chan, University of California at Berkeley (USA)
Martin Vetterli, University of California at Berkeley (USA)
This paper describes an effort to extend the Lempel-Ziv algorithm to a practical universal lossy compression algorithm. It is based on the idea of approximate string matching with a rate-distortion (R-D) criterion, and is addressed within the framework of vector quantization (VQ). A practical one-pass algorithm for VQ codebook construction and adaptation for individual signals is developed that assumes no prior knowledge of the source statistics and involves no iteration. We call this technique rate-distortion Lempel-Ziv (RDLZ). As in the case of the Lempel-Ziv algorithm, the encoded bit stream consists of codebook (dictionary) updates as well as indices (pointers) to the codebook. The idea of trading bits for distortion in modifying the codebook is introduced. Experimental results show that, for Gaussian sources as well as real images, RDLZ performs comparably to, and sometimes better than, static-codebook VQ trained on the corresponding sources or images.
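The sketch below illustrates a one-pass adaptive-dictionary VQ in the spirit of the idea described above: each input vector is matched against a growing dictionary under a rate-distortion criterion, and extra bits are spent on a dictionary update when matching is too costly. The Lagrange multiplier, rate estimates, literal cost, and data are illustrative assumptions and do not reproduce the authors' RDLZ construction.

import numpy as np

rng = np.random.default_rng(2)
source = rng.normal(size=(500, 4))      # hypothetical Gaussian source vectors
dictionary = [np.zeros(4)]              # start with a single all-zero entry
lam = 0.5                               # Lagrange multiplier trading rate for distortion
bits_per_literal = 4 * 8                # assumed cost of transmitting a new entry verbatim

stream = []                             # (flag, payload) pairs forming the encoded stream
for x in source:
    D = np.array([np.sum((d - x) ** 2) for d in dictionary])
    best = int(np.argmin(D))
    index_bits = max(1, int(np.ceil(np.log2(len(dictionary)))))
    cost_match = D[best] + lam * index_bits
    cost_update = lam * (index_bits + bits_per_literal)   # literal assumed sent essentially losslessly
    if cost_match <= cost_update:
        stream.append(("match", best))                    # pointer into the dictionary
    else:
        dictionary.append(x.copy())                       # trade bits for distortion
        stream.append(("update", x.copy()))

print(len(dictionary), "dictionary entries after one pass")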
Chang Y. Choo, San Jose State University
Erik Kristenson, San Jose State University
Nasser M. Nasrabadi, State University of New York
Xiaonong Ran, National Semiconductor Corporation (USA)
One of the problems with vector quantization (VQ) is its relatively long encoding time, especially when an exhaustive search is made for the best codevector. This paper presents a hashing-based technique for organizing the codebook so that the search time can be significantly reduced. Hashing gives the speed advantage of a direct search while maintaining a codebook of reasonable size. Experiments show that hashing-based VQ maintained image quality as the encoding time was reduced, whereas full-search VQ suffered greatly. For example, for 2 x 2 vectors and a 1024-entry codebook, encoding time was reduced by a factor of 10 without significant loss of image quality.
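The sketch below shows one way a codebook can be organized by hashing so that only a few candidates are searched per input block. The hash key (a quantized block mean) and the bucket/fallback policy are illustrative assumptions, not necessarily those used in the paper.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
codebook = rng.uniform(0, 255, size=(1024, 4))      # 1024 codewords for 2x2 blocks

def hash_key(v, step=8.0):
    return int(v.mean() // step)                    # cheap scalar feature as the hash key

buckets = defaultdict(list)
for i, c in enumerate(codebook):
    buckets[hash_key(c)].append(i)

def encode(block):
    key = hash_key(block)
    # search the block's bucket and its two neighbours; fall back to full search if empty
    candidates = sum((buckets[k] for k in (key - 1, key, key + 1)), []) or range(len(codebook))
    dists = [np.sum((codebook[i] - block) ** 2) for i in candidates]
    return list(candidates)[int(np.argmin(dists))]

print(encode(rng.uniform(0, 255, size=4)))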
D. M. Bethel, Bath University (UK)
D. M. Monro, Bath University (UK)
We report a novel image coder that is a hybrid of fractal coding and vector quantisation. The approach to image compression is to form an approximate image by one method and clean up the errors by another. In this realization, image blocks are approximated by polynomial functions, and the residual image blocks (RIBs) are coded by vector quantisation using a codebook which is small enough to transmit with the image. The method is evaluated on a number of parameters, and the results are found to be intermediate between fractal and JPEG coding in rate/distortion performance. Possible further improvements are indicated.
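As a rough sketch of the hybrid idea above, the code below approximates each block with a low-order 2-D polynomial (here a bilinear surface fitted by least squares) and then codes the residual block with a small VQ codebook. The residual codebook is a random placeholder; the paper builds an image-specific codebook small enough to transmit with the image.

import numpy as np

rng = np.random.default_rng(4)
B = 8
yy, xx = np.mgrid[0:B, 0:B]
basis = np.stack([np.ones_like(xx), xx, yy, xx * yy], axis=-1).reshape(-1, 4).astype(float)

def encode_block(block, residual_cb):
    coeffs, *_ = np.linalg.lstsq(basis, block.reshape(-1), rcond=None)   # polynomial fit
    approx = (basis @ coeffs).reshape(B, B)
    residual = (block - approx).reshape(-1)
    idx = int(np.argmin(((residual_cb - residual) ** 2).sum(axis=1)))    # RIB codeword search
    return coeffs, idx                     # transmitted: polynomial coefficients + RIB index

residual_cb = rng.normal(scale=5.0, size=(16, B * B))    # placeholder residual codebook
block = rng.uniform(0, 255, size=(B, B))
print(encode_block(block, residual_cb))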
James B. Farison, University of Toledo (USA)
Mahmoud K. Quweider, University of Toledo (USA)
A novel technique for classifying image edge blocks is presented. It is based on defining a set of linearly independent signature vectors with a one-to-one association with the edge classes. A set of filter vectors, each emphasizing the projection onto one signature vector while suppressing all others, is then designed. An input edge block is classified by choosing the index of the filter with the maximum output magnitude. Images coded on the basis of this classification are shown to preserve their quality and to enjoy a considerable dB gain over two existing methods. The new technique can easily be implemented with a parallel algorithm and modest storage requirements.
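One reading of the construction above is sketched below: if the filters are required to satisfy f_i . s_j = delta_ij, they can be taken as the rows of the pseudo-inverse of the signature matrix, and a block is classified by the filter with the largest output magnitude. The random signatures and dimensions are placeholders, not the paper's edge signatures.

import numpy as np

rng = np.random.default_rng(5)
num_classes, dim = 8, 16
S = rng.normal(size=(dim, num_classes))        # columns = signature vectors (assumed independent)
F = np.linalg.pinv(S)                          # rows = filter vectors, so F @ S is (close to) identity

def classify(block_vec):
    outputs = F @ block_vec
    return int(np.argmax(np.abs(outputs)))     # index of the maximum-magnitude filter output

edge_block = S[:, 3] + 0.05 * rng.normal(size=dim)   # noisy instance of class 3's signature
print(classify(edge_block))                          # should recover class 3 for this example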
Mikael Skoglund, Chalmers University of Technology (SWEDEN)
A Hadamard-based framework for soft decoding in vector quantization over a Rayleigh fading channel is presented. We also provide an efficient algorithm for the decoding calculations. The system has relatively low complexity and gives a low transmission rate, since no redundant channel coding is used. Our image coding simulations indicate that the soft decoder outperforms its hard-decoding counterpart, with a larger relative gain on bad channels. The simulations also indicate that encoder training for hard decoding suffices to obtain good results with the soft decoder.
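The sketch below illustrates the general principle of soft decoding for VQ indices: the decoder outputs the posterior-weighted mean of the codevectors instead of hard-selecting one index. It uses a simple BPSK/Gaussian channel model for brevity and does not reproduce the paper's Hadamard-based factorization or the Rayleigh-fading channel; all sizes and priors are assumptions.

import numpy as np

rng = np.random.default_rng(6)
codebook = rng.normal(size=(16, 4))               # 16 codevectors, 4 bits per index
bits = ((np.arange(16)[:, None] >> np.arange(4)) & 1) * 2 - 1   # index -> +/-1 bit pattern
p_index = np.full(16, 1 / 16)                     # assumed uniform index prior

def soft_decode(received, noise_var):
    # bitwise Gaussian likelihoods for the received soft values (BPSK-style model)
    loglik = -((received[None, :] - bits) ** 2).sum(axis=1) / (2 * noise_var)
    post = p_index * np.exp(loglik - loglik.max())
    post /= post.sum()
    return post @ codebook                        # E[codevector | received soft values]

tx_index = 5
received = bits[tx_index] + rng.normal(scale=0.8, size=4)
print(soft_decode(received, noise_var=0.64))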
D. Lebedeff, URA 1376 CNRS
P. Mathieu, URA 1376 CNRS
M. Barlaud, URA 1376 CNRS
C. Lambert-Nebout, CNES
P. Bellemain, Aerospatiale (FRANCE)
This paper proposes an adaptive vector quantization scheme designed for spaceborne raw SAR data compression. The approach is based on the fact that spaceborne raw data are Gaussian distributed, independent, and fairly stationary over an interval (in both azimuth and range) that depends on the SAR system parameters. Block Gain Adaptive Vector Quantization (BGAVQ) is a generalization of the Block Adaptive Quantization (BAQ) algorithm to vectors. It operates as a set of optimum vector quantizers (designed with the LBG algorithm) with different gain settings. The adaptation is particularly efficient since, for a fixed compression ratio, the same codebook is used for any spaceborne SAR data. Results on simulated and real images, at data rates of 1.5 to 2 bits/sample, have confirmed the expected performance of the BGAVQ algorithm.
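The sketch below shows a block gain-adaptive VQ step in the spirit of the scheme described above: a block's gain (standard deviation) is estimated and quantized, the samples are normalized by that gain, and the normalized vectors are coded with a single fixed codebook intended for unit-variance Gaussian data. The gain levels, codebook, block size, and data are illustrative placeholders, not the paper's design values.

import numpy as np

rng = np.random.default_rng(7)
gain_levels = np.array([0.5, 1.0, 2.0, 4.0])      # hypothetical scalar gain codebook
shape_cb = rng.normal(size=(64, 4))               # placeholder codebook for unit-variance vectors

def encode_block(block_samples, vec_dim=4):
    gain_idx = int(np.argmin(np.abs(gain_levels - block_samples.std())))
    g = gain_levels[gain_idx]
    vecs = (block_samples / g).reshape(-1, vec_dim)
    idx = [int(np.argmin(((shape_cb - v) ** 2).sum(axis=1))) for v in vecs]
    return gain_idx, idx                          # one gain index per block + vector indices

block = 2.3 * rng.normal(size=64)                 # simulated Gaussian raw-data block
print(encode_block(block))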
Faruk M. O. Eryurtlu, University of Surrey (UK)
Ahmet M. Kondoz, University of Surrey (UK)
Barry G. Evans, University of Surrey (UK)
This paper presents a novel video coding algorithm that exploits past-frame statistics for the entropy coding of the gain-shape VQ parameters. A subband transform is used to decorrelate the motion-compensation residual in order to overcome the block-visibility problem. Vectors formed from samples of different subbands are quantised using a gain-shape VQ technique that allows effective bit-rate control. Compared with the H.261 standard, the proposed algorithm gives higher PSNR values at lower bit rates.
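For reference, the snippet below shows a generic gain-shape VQ quantization step of the kind the abstract refers to: the vector's gain and shape are quantized separately, the shape by maximum correlation with a unit-norm shape codebook. The codebook, gain levels, and dimensions are illustrative placeholders, not the paper's parameters.

import numpy as np

rng = np.random.default_rng(8)
shape_cb = rng.normal(size=(128, 16))
shape_cb /= np.linalg.norm(shape_cb, axis=1, keepdims=True)   # unit-norm shape codewords
gain_levels = np.linspace(0.5, 16.0, 32)                      # hypothetical scalar gain quantizer

def gain_shape_encode(v):
    shape_idx = int(np.argmax(shape_cb @ v))                  # best-correlated shape codeword
    gain = shape_cb[shape_idx] @ v                            # optimal gain for that shape
    gain_idx = int(np.argmin(np.abs(gain_levels - gain)))
    return gain_idx, shape_idx

def gain_shape_decode(gain_idx, shape_idx):
    return gain_levels[gain_idx] * shape_cb[shape_idx]

v = 3.0 * rng.normal(size=16)
print(gain_shape_decode(*gain_shape_encode(v)))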