Image Coding


TROBIC: Two-Row Buffer Image Compression

Authors:

Viresh Ratnakar,

Page (NA) Paper number 1007

Abstract:

We describe a color image compression and decompression scheme suitable for high resolution printers. The proposed scheme requires only two image rows in memory at any time, and hence is suitable for low-cost, high-resolution printing systems. The compression ratio can be specified and is achieved exactly. Compound document images consisting of continuous-tone, natural regions mixed with synthetic graphics or text are handled with uniformly high quality. While the target compression ratios are moderate, the quality requirements are extremely high: the compressed and decompressed printed image needs to be virtually indistinguishable from the original printed image. The scheme combines a lossless block coding technique with a wavelet block codec. The wavelet block codec uses a new and simple entropy coding technique that is more suitable for the specific block-structure, compression target, and discrete wavelet transform used.
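The exact-ratio guarantee described above can be pictured as a fixed byte budget per two-row strip. The sketch below is a toy illustration of that budgeting idea only, not the authors' codec: it uses zlib as a stand-in for the lossless block coder and raw truncation as a stand-in for the lossy wavelet fallback; all names and the 1-byte mode flag are hypothetical.

```python
import zlib

def encode_strip(strip: bytes, budget: int) -> bytes:
    """Toy fixed-budget strip coder (not the TROBIC algorithm itself).

    Each strip occupies exactly `budget` bytes: a 1-byte mode flag plus
    payload. Mode 0 = lossless (zlib) padded up to the budget; mode 1 =
    a truncated raw prefix, standing in for a lossy fallback coder.
    """
    packed = zlib.compress(strip)
    if 1 + len(packed) <= budget:
        # lossless fits: pad out to the exact budget
        return bytes([0]) + packed + b"\x00" * (budget - 1 - len(packed))
    # lossless too big: fall back to a lossy representation that fits
    return bytes([1]) + strip[: budget - 1]

# Every strip costs the same number of bytes, so the overall
# compression ratio is hit exactly regardless of image content.
strips = [bytes([10] * 64), bytes(range(64))]
coded = [encode_strip(s, 32) for s in strips]
```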

IC991007.PDF (From Author) IC991007.PDF (Rasterized)



Prediction Based On Backward Adaptive Recognition Of Local Texture Orientation And Poisson Statistical Model For Lossless/Near-Lossless Image Compression

Authors:

Xiaohui Xue, Department of Computer Science, Harbin Institute of Technology, P. R. China, 150001 (China)
Wen Gao, Department of Computer Science, Harbin Institute of Technology, P. R. China, 150001 (China)

Page (NA) Paper number 1112

Abstract:

This paper presents a prediction-based lossless/near-lossless image compression algorithm. Within this framework there are three modules: a prediction model, a statistical model, and entropy coding. This paper focuses on the first two and puts forward a new method for each: a prediction model based on backward adaptive recognition of local texture orientation (BAROLTO), and a Poisson statistical model. As far as we know, BAROLTO is the most efficient predictor reported. The Poisson model is designed to mitigate context dilution while still exploiting a large neighborhood, so more local correlation can be captured. Experiments show that our compression system (BP), based on BAROLTO prediction and the Poisson model, significantly outperforms the products of IBM and HP.
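As an illustration of steering a predictor by local edge orientation (not BAROLTO itself, whose backward-adaptive orientation recognition is more elaborate), the well-known median edge detector (MED) predictor from JPEG-LS switches between the horizontal and vertical neighbours when it detects an edge:

```python
def med_predict(w: int, n: int, nw: int) -> int:
    """Median edge detector (MED) predictor, as used in JPEG-LS.

    w, n, nw are the west, north and north-west neighbours of the
    current pixel. Shown only to illustrate orientation-adaptive
    prediction; BAROLTO's recognition step is more sophisticated.
    """
    if nw >= max(w, n):
        return min(w, n)   # edge detected: predict along the edge
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw      # smooth region: planar prediction
```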

IC991112.PDF (Scanned)



Encoding Of Image Partitions Using A Standard Technique For Lossless Image Compression

Authors:

Armando J Pinho, DET / INESC, University of Aveiro, Portugal (Portugal)

Page (NA) Paper number 1434

Abstract:

Recently, a new technique for the lossless encoding of boundary maps was introduced, based on the concept of "transition points". In this paper we show that, using a simple representation for the transition points, the JBIG image coding standard can be used to encode image partitions. Moreover, in most cases this new approach outperforms differential chain-coding in both efficiency and simplicity of implementation.
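A minimal sketch of the transition-point idea, under the assumption that a transition point marks a change of region label along a row of the partition (the paper's exact representation, and how it is mapped onto a bi-level image for JBIG, may differ):

```python
def transition_points(row):
    """Positions where the region label changes along one row of a
    partition (label map). A sparse binary image of such transitions
    is the kind of input a bi-level coder like JBIG can encode."""
    return [i for i in range(1, len(row)) if row[i] != row[i - 1]]

# one row of labels covering three regions -> two transition points
points = transition_points([0, 0, 0, 1, 1, 2, 2, 2])
```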

IC991434.PDF (From Author) IC991434.PDF (Rasterized)



Generalized Variable Dimensional Set Partitioning For Embedded Wavelet Image Compression

Authors:

Debargha Mukherjee,
Sanjit K Mitra,

Page (NA) Paper number 1482

Abstract:

A vector enhancement of Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT) methodology, named VSPIHT, has recently been proposed for embedded wavelet image compression. While the VSPIHT algorithm works better than scalar SPIHT for most images, a single vector dimension used for coding an entire image may not be optimal. Since statistics vary widely within an image, greater efficiency can be achieved if different vector dimensions are used for coding the wavelet coefficients from different portions of the image. We present a generalized methodology for developing a variable dimensional set partitioning coder, in which different parts of an image may be coded in different vectoring modes, with different scale factors, and up to different numbers of passes. A Lagrangian rate-distortion criterion is used to make the optimal coding choices. Coding passes are made jointly across the vectoring modes to produce an embedded bitstream.
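The Lagrangian selection step can be sketched as picking, per image region, the vectoring mode that minimises the cost D + λR. The candidate distortion/rate figures below are invented purely for illustration:

```python
def best_mode(candidates, lam):
    """Pick the coding choice minimising the Lagrangian cost D + lam*R.

    `candidates` maps a mode name (e.g. a vector dimension) to its
    (distortion, rate) pair; `lam` trades distortion against rate.
    """
    return min(candidates,
               key=lambda m: candidates[m][0] + lam * candidates[m][1])

# hypothetical (distortion, rate) figures for three vectoring modes
modes = {"scalar": (4.0, 10.0), "2x1": (2.5, 12.0), "2x2": (2.0, 16.0)}
```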

IC991482.PDF (From Author) IC991482.PDF (Rasterized)



A Partially Decodable Code for Scalable Compression of Super High Definition Images

Authors:

Madoka Hasegawa,
Shigeo Kato,
Yoshifumi Yamada,

Page (NA) Paper number 1608

Abstract:

Multimedia communication systems using super high definition (SHD) images are widely desired in communities such as medical imaging and digital museums and libraries. There are, however, many problems in SHD image communication systems because of the high pixel accuracy and high resolution. We considered the functions indispensable to SHD image application systems; these can be summarized as three items: reversibility, scalability, and progressibility. This paper proposes a partially decodable coding method that realizes the scalability function. That is, when a whole image cannot be displayed on the monitor, a reduced image is displayed first so that a region of interest can be selected; the image data of the selected region is then extracted directly from the code stream. For this purpose, a partially decodable code must be introduced. We propose a new partially decodable coding method based on the Golomb-Rice code.
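For reference, a minimal Golomb-Rice coder, the building block named above; the paper's partial-decodability layer on top of it is not shown. Bit-strings are used for clarity where a real codec would pack bits:

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice code with parameter k (divisor 2**k): a unary
    quotient terminated by '1', followed by k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "0" * q + "1" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("1")                              # unary quotient
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0  # remainder bits
    return (q << k) | r
```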

IC991608.PDF (Scanned)



Filter Bank Optimization for High-Dimensional Compression of Pre-Stack Seismic Data

Authors:

Tage Røsten,
Viktor A Marthinussen,
Tor A Ramstad,
Andrew Perkis,

Page (NA) Paper number 1642

Abstract:

A multi-dimensional variable-length subband coder for pre-stack seismic data is presented. 2-D and 3-D separable near-perfect-reconstruction filter banks are optimized to maximize the coding gain, assuming that the correlation properties of pre-stack seismic data can be modeled by direction-dependent autoregressive processes. Identical quantization and entropy-coder allocation strategies are used to isolate the compression efficiency of the different high-dimensional filter bank methods. An example, with compression ratios ranging from 160:1 to 320:1, shows that 3-D subband coding of common shot gathers performs 50% better in terms of bit rate at a given signal-to-noise ratio than 2-D subband coding of common shot gathers.
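The optimisation target can be illustrated with the classical subband coding gain, shown here in its simplest equal-band-size, orthonormal form (an assumption for the sketch; the paper's variable-length, near-perfect-reconstruction setting is more general):

```python
from math import prod

def coding_gain(subband_vars):
    """Coding gain of an orthonormal subband split with equal-size
    bands: the ratio of the arithmetic to the geometric mean of the
    subband variances (10*log10 of this gives the gain in dB). The
    flatter the variances, the smaller the gain."""
    m = len(subband_vars)
    arithmetic_mean = sum(subband_vars) / m
    geometric_mean = prod(subband_vars) ** (1.0 / m)
    return arithmetic_mean / geometric_mean
```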

IC991642.PDF (Scanned)



A Wavelet Based Stereo Image Coding Algorithm

Authors:

Qin Jiang,
Joon J Lee, Department of Computer Engineering, Dongseo University, Pusan 617-716, South Korea. (Korea)
Monson H Hayes III,

Page (NA) Paper number 1840

Abstract:

Stereo image pair coding is an important issue in stereo data compression. A wavelet-based stereo image pair coding algorithm is proposed in this paper. The wavelet transform decomposes each image into an approximation image and three edge images. In the wavelet domain, a disparity estimation technique is developed that estimates the disparity field using both the approximation image and the edge images. To improve the accuracy of the wavelet images produced by disparity compensation, a novel wavelet-based subspace projection technique (SPT) is developed. In the SPT, block-dependent subspaces are constructed using block-varying basis vectors derived from the disparity-compensated wavelet images. Experimental results show that the proposed algorithm achieves efficient stereo image compression.
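Disparity compensation itself can be sketched with a generic 1-D sum-of-absolute-differences block matcher. This is only a baseline for intuition, not the paper's wavelet-domain estimator: each left-image block is predicted from a horizontally shifted right-image block.

```python
def disparity_sad(left_row, right_row, block, max_d):
    """Per-block disparity along one scanline by exhaustive SAD search
    over shifts 0..max_d (blocks start at max_d so every shift is
    in range)."""
    disparities = []
    for start in range(max_d, len(left_row) - block + 1, block):
        target = left_row[start:start + block]

        def sad(d):
            # sum of absolute differences against the right row shifted by d
            return sum(abs(target[i] - right_row[start - d + i])
                       for i in range(block))

        disparities.append(min(range(max_d + 1), key=sad))
    return disparities

# synthetic pair: the left scanline is the right one shifted by 2 pixels
right = list(range(12))
left = [0, 0] + right[:10]
est = disparity_sad(left, right, block=4, max_d=3)
```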

IC991840.PDF (From Author) IC991840.PDF (Rasterized)



Least Squares Based Decoding for BCH Codes In Image Applications

Authors:

Nikola Rozic, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)
Dinko Begusić, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)
Jurica Ursic, Department of Electronics, University of Split, FESB, R.Boskovica bb., HR-21000 Split, Croatia (Croatia)

Page (NA) Paper number 2162

Abstract:

BCH codes in the frequency domain provide robust channel coding for image applications. The underlying problem, estimating real/complex sinusoids in additive white noise, may be formulated and solved in different ways. The standard approach is based on the least squares (LS) method and the Berlekamp-Massey algorithm (BMA). In this paper we compare the performance of the BMA with other LS-based algorithms, including the minimum norm solution algorithm (MNS), the forward-backward linear prediction algorithm (FBLP), and the singular-value-decomposition-based minimum norm algorithm (SVD-MNA). Computer experiments show that introducing the minimum norm solution, forward-backward prediction, and the SVD may significantly improve decoder performance at relatively low SNR. In selecting among the proposed algorithms, a performance/complexity tradeoff has to be considered.
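The sinusoid-estimation core can be illustrated with a one-coefficient, single-sinusoid, noiseless toy version of linear-prediction frequency estimation (the algorithms compared in the paper handle the general noisy, multi-sinusoid case). A real sinusoid satisfies x[n] = c·x[n-1] - x[n-2] with c = 2cos(ω), and the least-squares estimate of c has a closed form; the forward and backward prediction errors coincide in this symmetric case.

```python
from math import acos, cos, pi

def lp_freq(x):
    """Frequency of a single noiseless real sinusoid from the
    least-squares fit of x[n] = c*x[n-1] - x[n-2], c = 2*cos(w).
    Toy illustration of linear-prediction frequency estimation."""
    num = sum(x[n - 1] * (x[n] + x[n - 2]) for n in range(2, len(x)))
    den = sum(2 * x[n - 1] ** 2 for n in range(2, len(x)))
    return acos(num / den)   # num/den = c/2 = cos(w)

w_true = 2 * pi * 0.11
signal = [cos(w_true * n) for n in range(64)]
```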

IC992162.PDF (From Author) IC992162.PDF (Rasterized)



Context Modeling of Wavelet Coefficients in EZW-Based Lossless Image Coding

Authors:

Veeraraghavan N Ramaswamy, Bell Laboratories, Lucent Technologies, Holmdel, NJ, USA (USA)
Kamesh R Namuduri, Center for Theoretical Studies, Clark Atlanta University, Atlanta, GA, USA (USA)
Nagarajan Ranganathan, Dept. of ECE, Univ. of Texas at El Paso, TX, USA (USA)

Page (NA) Paper number 2369

Abstract:

The EZW lossless coding framework consists of three stages: (i) a reversible wavelet transform, (ii) an EZW data structure to order the coefficients, and (iii) arithmetic coding using context modeling. In this work, we discuss experiments on context modeling of wavelet coefficients for arithmetic coding aimed at optimizing compression efficiency. The context modeling can be divided into two parts: (i) context modeling of the significance information and (ii) context modeling of the remaining, or residue, information. Our experiments show that while context modeling of the residue yielded considerable gains in compression efficiency, context modeling of the significance information helped only to a modest extent. Keywords: lossless, image coding, EZW, wavelet, context modeling.
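The effect of context modeling can be sketched with per-context adaptive counters and the ideal code length -log2(p): a good context splits the bit stream into predictable sub-streams that cost fewer bits. The context indices below are arbitrary placeholders for neighbour-significance patterns, not the paper's actual contexts.

```python
from math import log2

def code_length(bits, contexts):
    """Ideal adaptive code length (in bits) of a binary stream, with a
    separate Krichevsky-Trofimov-style counter per context, as an
    arithmetic coder with context modelling would achieve."""
    counts = {}
    total = 0.0
    for b, ctx in zip(bits, contexts):
        c0, c1 = counts.get(ctx, (0.5, 0.5))       # KT prior counts
        p = (c1 if b else c0) / (c0 + c1)          # adaptive estimate
        total += -log2(p)                          # ideal cost of this bit
        counts[ctx] = (c0 + (b == 0), c1 + (b == 1))
    return total

bits = [0, 0, 0, 0, 1, 1, 1, 1]
one_ctx = [0] * 8                    # no modelling: one shared counter
good_ctx = [0, 0, 0, 0, 1, 1, 1, 1] # predictive context per bit group
```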

IC992369.PDF (From Author) IC992369.PDF (Rasterized)



Improving Single-Pass Adaptive VQ

Authors:

Francesco Rizzo,
James A Storer,
Bruno Carpentieri,

Page (NA) Paper number 1955

Abstract:

Costantinescu and Storer introduced in 1993 a single-pass vector quantization algorithm that, with no specific training or prior knowledge of the data, achieves better compression than the JPEG standard, along with a number of computational advantages: an adjustable fidelity/compression tradeoff, precise guarantees on any lxl sub-block of the image, and fast table-lookup decoding. In this paper we improve that basic algorithm by blending it with the mean shape-gain vector quantization (MSGVQ) compression scheme. This blending yields slightly better compression and a clear improvement in visual quality.
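The MSGVQ side of the blend rests on the mean/shape-gain decomposition of a block, sketched here without the actual quantizer codebooks:

```python
from math import sqrt

def msg_decompose(block):
    """Mean/shape-gain decomposition used by MSGVQ: a block splits into
    its mean, the gain (norm of the mean-removed residual), and a
    unit-norm shape vector; each part is quantized separately."""
    m = sum(block) / len(block)
    residual = [v - m for v in block]
    g = sqrt(sum(r * r for r in residual))
    shape = [r / g for r in residual] if g else [0.0] * len(block)
    return m, g, shape

def msg_reconstruct(m, g, shape):
    """Rebuild the block from its (quantized) mean, gain and shape."""
    return [m + g * s for s in shape]

blk = [10.0, 12.0, 14.0, 16.0]
m, g, shape = msg_decompose(blk)
```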

IC991955.PDF (From Author) IC991955.PDF (Rasterized)
