Session: NEURAL-P1
Time: 1:00 - 3:00, Tuesday, May 8, 2001
Location: Exhibit Hall Area 5
Title: Neural Networks: Algorithms and Applications
Chair: Takashi Matsumoto

1:00, NEURAL-P1.1
IMPULSES AND STOCHASTIC ARITHMETIC FOR SIGNAL PROCESSING
J. KEANE, L. ATLAS
The work described in this paper explores the use of Poisson point processes and stochastic arithmetic to perform signal processing functions. Our work is inspired by the asynchrony and fault tolerance of biological neural systems. The essence of our approach is to code the input signal in the rate parameter of a Poisson point process, perform stochastic computing operations on the signal in the arrival or "pulse" domain, and decode the output signal by estimating the rate of the resulting process. An analysis of the Poisson pulse frequency modulation encoding error is performed. Asynchronous, stochastic computing operations are applied to the impulse stream and analyzed. A special finite impulse response (FIR) filtering scheme is proposed that preserves the Poisson properties and allows filters to be cascaded without compromising the ideal signal statistics.
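As a rough illustration of the rate-coding idea (not the authors' implementation), the following Python sketch encodes non-negative samples as Poisson pulse counts, adds two signals in the pulse domain by superposing their streams (superposition of independent Poisson processes is itself Poisson with summed rate), and decodes by rate estimation; the bin width and bin counts are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def encode(signal, dt, bins_per_sample):
        """Encode a non-negative signal as Poisson pulse counts:
        each sample sets the rate (pulses/second) for its time bins."""
        rates = np.repeat(signal, bins_per_sample)   # piecewise-constant rate
        return rng.poisson(rates * dt)               # pulse counts per bin

    def decode(counts, dt, bins_per_sample):
        """Estimate the rate by averaging pulse counts over each sample's bins."""
        counts = counts.reshape(-1, bins_per_sample)
        return counts.mean(axis=1) / dt

    # two slowly varying non-negative "signals" (rates in pulses/second)
    t = np.linspace(0, 2 * np.pi, 64)
    x = 50 + 30 * np.sin(t)
    y = 40 + 20 * np.cos(t)

    dt, bins = 1e-3, 1000
    px, py = encode(x, dt, bins), encode(y, dt, bins)

    # addition in the pulse domain: superposing two independent Poisson
    # streams yields a Poisson stream whose rate is the sum of the rates
    x_plus_y_hat = decode(px + py, dt, bins)
    print(np.max(np.abs(x_plus_y_hat - (x + y))))  # estimation error shrinks as bins*dt grows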

1:00, NEURAL-P1.2
FUZZY ANISOTROPIC DIFFUSION FOR SPECKLE FILTERING
S. AJA, C. ALBEROLA, J. RUIZ
An anisotropic diffusion filter controlled by fuzzy rules is presented. The proposed filter is based on the Perona-Malik technique, using fuzzy reasoning to calculate the diffusion coefficient that controls the whole diffusion. The method has the advantage that it can be used for both smoothing and noise cleaning, as well as edge enhancement. This new approach also allows us to model the diffusion process through a rule base to achieve better performance. Some examples are given to illustrate the effectiveness of the proposed technique.
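The filter builds on Perona-Malik diffusion; the sketch below implements the classical scheme with the standard exponential conductance in place of the paper's fuzzy rule base (parameter values are illustrative).

    import numpy as np

    def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
        """Classical Perona-Malik anisotropic diffusion (4-neighbour scheme).
        The paper's fuzzy-rule diffusion coefficient would replace g()."""
        img = img.astype(float).copy()
        def g(grad):                       # conductance: small across strong edges
            return np.exp(-(grad / kappa) ** 2)
        for _ in range(n_iter):
            # finite differences toward the four neighbours (zero flux at borders)
            dn = np.roll(img, 1, axis=0) - img;  dn[0, :] = 0
            ds = np.roll(img, -1, axis=0) - img; ds[-1, :] = 0
            de = np.roll(img, -1, axis=1) - img; de[:, -1] = 0
            dw = np.roll(img, 1, axis=1) - img;  dw[:, 0] = 0
            img += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return img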

1:00, NEURAL-P1.3
DUAL NU-SUPPORT VECTOR MACHINE WITH ERROR RATE AND TRAINING SIZE BIASING
H. CHEW, R. BOGNER, C. LIM
Support Vector Machines (SVMs) have been successfully applied to classification problems. The difficulty in selecting the most effective error penalty has been partly resolved with the nu-SVM. However, the use of uneven training class sizes, which occurs frequently in target detection problems, results in machines biased towards the class with the larger training set. We propose an extended nu-SVM to counter the effects of unbalanced training class sizes. The resulting Dual nu-SVM provides the facility to counter these effects, as well as to adjust the error penalties of each class separately. The parameter nu of each class provides a lower bound on the fraction of support vectors of that class, and an upper bound on the fraction of bounded support vectors of that class. These bounds allow control of the error rate allowed for each class, and enable the training of machines with specific error rate requirements.
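For reference, the standard nu-SVM primal (Schölkopf et al.) that the Dual nu-SVM extends reads roughly as follows; as described above, the extension introduces a separate nu per class rather than the single nu shown here.

    \min_{w,\, b,\, \rho,\, \xi} \;\; \tfrac{1}{2}\|w\|^{2} \;-\; \nu\rho \;+\; \tfrac{1}{m}\sum_{i=1}^{m}\xi_{i}
    \quad \text{s.t.} \quad y_{i}\bigl(w^{\top}\phi(x_{i}) + b\bigr) \ge \rho - \xi_{i}, \;\; \xi_{i} \ge 0, \;\; \rho \ge 0,

where nu lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors (bounded support vectors).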

1:00, NEURAL-P1.4
FAST PRINCIPAL COMPONENT EXTRACTION BY A HOMOGENEOUS NEURAL NETWORK
S. OUYANG, Z. BAO
On the basis of the concepts of both the weighted subspace criterion and information maximization, this paper proposes a weighted information criterion (WINC) for finding the optimal solution of a homogeneous neural network. We develop two adaptive algorithms based on the WINC for extracting multiple principal components in parallel. Both algorithms provide an adaptive step size, which leads to a significant improvement in learning performance. Furthermore, the recursive least squares (RLS) version of the WINC algorithms has a low computational complexity in terms of the input vector dimension N and the number p of desired principal components. Since the weighting matrix does not require an accurate value, the system design of the WINC algorithm is simplified for real applications. Simulation results are provided to illustrate the effectiveness of the WINC algorithms for principal component analysis.
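The WINC update itself is specific to the paper; as a generic point of comparison, the following sketch implements Sanger's Generalized Hebbian Algorithm, a well-known neural rule that likewise extracts multiple principal components in parallel (learning rate, epochs and initialisation are illustrative).

    import numpy as np

    def gha(X, p, eta=1e-3, n_epochs=20, seed=0):
        """Sanger's Generalized Hebbian Algorithm: extracts the first p
        principal components of zero-mean data X (n_samples x N) in parallel."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((p, X.shape[1])) * 0.1   # rows converge to eigenvectors
        for _ in range(n_epochs):
            for x in X:
                y = W @ x                                # component outputs
                # Gram-Schmidt-like term keeps the rows mutually decorrelated
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # usage: W = gha(X - X.mean(0), p=3); rows of W approximate the top eigenvectors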

1:00, NEURAL-P1.5
A NEW FEEDFORWARD NEURAL NETWORK HIDDEN LAYER NEURON PRUNING ALGORITHM
F. FNAIECH, N. FNAIECH, M. NAJIM
A new approach to determining the number of hidden units of a feedforward neural network (FNN) is proposed. The FNN can be represented by a Volterra series, i.e., as a nonlinear input-output model. The proposed algorithm is based on the following three steps: first, we develop the nonlinear activation function of the hidden layer neurons in a Taylor expansion; secondly, we express the neural network output as a NARX (nonlinear auto-regressive with exogenous input) model; and finally, by appropriately using the nonlinear order selection algorithm proposed by Kortmann-Unbehauen, we select the most relevant signals in the NARX model obtained. Starting from the output layer, this pruning procedure is performed on each node in each layer. Over various initial conditions, and using this new algorithm with standard backpropagation (SBP), we show a reduction in the number of nonsignificant hidden layer neurons.
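To make the first step concrete: with a tanh hidden nonlinearity (used here as an illustrative choice), the Taylor expansion turns each hidden unit into a polynomial in its weighted inputs,

    \tanh(u_j) = u_j - \frac{u_j^{3}}{3} + \frac{2u_j^{5}}{15} - \cdots,
    \qquad u_j = \sum_{k} w_{jk} x_k + b_j,

so the network output expands into a sum of polynomial (Volterra/NARX-type) terms in the inputs, whose significance can then be ranked by an order selection procedure such as Kortmann-Unbehauen.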

1:00, NEURAL-P1.6
COMPLEX BACKPROPAGATION NEURAL NETWORK USING ELEMENTARY TRANSCENDENTAL FUNCTIONS
T. ADALI, T. KIM
Designing a neural network (NN) for processing complex signals is a challenging task due to the lack of nonlinear activation functions that are bounded and differentiable over the entire complex domain C. To avoid this difficulty, 'splitting', i.e., using uncoupled real sigmoidal functions for the real and imaginary components, has been the traditional approach, and a number of fully complex activation functions that have been introduced can only correct magnitude distortion but cannot handle phase distortion. We have recently introduced a fully complex NN that uses a hyperbolic tangent function defined in the entire complex domain and showed that, for most practical signal processing problems, it is sufficient to have an activation function that is bounded and differentiable almost everywhere in the complex domain. In this paper, the fully complex NN design is extended to employ other complex activation functions from the hyperbolic and circular families and their inverses. They are shown to successfully restore nonlinear amplitude and phase distortions of non-constant modulus modulated signals.
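A minimal forward-pass sketch of the 'fully complex' idea, using the complex hyperbolic tangent as the activation (a toy two-layer network with arbitrary sizes; the paper additionally covers the other hyperbolic, circular and inverse functions and the corresponding backpropagation, which is not shown here).

    import numpy as np

    rng = np.random.default_rng(0)

    def complex_layer(x, W, b):
        """One fully complex layer: complex affine map followed by the
        complex hyperbolic tangent (analytic except at isolated poles)."""
        return np.tanh(W @ x + b)      # np.tanh acts on complex arguments

    n_in, n_hid, n_out = 4, 8, 1
    x  = rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in)
    W1 = (rng.standard_normal((n_hid, n_in)) + 1j * rng.standard_normal((n_hid, n_in))) * 0.1
    b1 = np.zeros(n_hid, dtype=complex)
    W2 = (rng.standard_normal((n_out, n_hid)) + 1j * rng.standard_normal((n_out, n_hid))) * 0.1
    b2 = np.zeros(n_out, dtype=complex)

    y = complex_layer(complex_layer(x, W1, b1), W2, b2)
    print(y)   # complex output: amplitude and phase both pass through the nonlinearity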

1:00, NEURAL-P1.7
OPTIMIZED NEURAL NETWORKS FOR MODELING OF LOUDSPEAKER DIRECTIVITY DIAGRAMS
E. WILK, J. WILK
For electro-acoustical simulation of sound reinforcement systems, calculation of the sound field distribution requires the frequency-dependent directivity patterns of the loudspeakers used. We use neural networks and a new adaptation rule with improved convergence behavior to model the directivity diagrams. This reduces storage space and simulation time and increases simulation accuracy.

1:00, NEURAL-P1.8
A NEW OPTIMIZING PROCEDURE FOR NU-SUPPORT VECTOR REGRESSOR
F. PÉREZ-CRUZ, A. ARTÉS-RODRÍGUEZ
We present a novel approach to solving the nu-SVR. It is based on an Iterative Re-Weighted Least Squares (IRWLS) procedure, which is simple to implement and can be tuned to reach the usual nu-SVR solution. The IRWLS procedure is much more computationally efficient than the Quadratic Programming techniques that are usually employed to solve this problem.
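For context, the problem being solved is the standard nu-SVR formulation of Schölkopf et al.,

    \min_{w,\, b,\, \varepsilon,\, \xi,\, \xi^{*}} \;\;
    \tfrac{1}{2}\|w\|^{2} + C\Bigl(\nu\varepsilon + \tfrac{1}{m}\sum_{i=1}^{m}(\xi_i + \xi_i^{*})\Bigr)
    \quad \text{s.t.} \quad
    (w^{\top}\phi(x_i) + b) - y_i \le \varepsilon + \xi_i, \;\;
    y_i - (w^{\top}\phi(x_i) + b) \le \varepsilon + \xi_i^{*}, \;\;
    \xi_i, \xi_i^{*} \ge 0, \;\; \varepsilon \ge 0.

In general terms, an IRWLS procedure replaces the non-quadratic loss at each iteration by a weighted quadratic approximation whose sample weights are recomputed from the current residuals; the exact weighting used in the paper is not reproduced here.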

1:00, NEURAL-P1.9
SUBBANDS AUDIO SIGNAL RECOVERING USING NEURAL NONLINEAR PREDICTION
G. COCCHI, A. UNCINI
Audio signal recovery is a common problem in the field of digital audio restoration, where corrupted samples must be replaced. In this paper a subband architecture is presented for audio signal recovery, using neural nonlinear prediction based on adaptive spline neural networks. The experimental results show the mean square reconstruction error and the maximum error obtained as the gap length increases from 200 to 5000 samples. The method gives good results, allowing reconstruction of over 100 ms of signal with little audible effect on overall quality. URL - http://infocom.uniroma1.it/aurel
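A minimal full-band sketch of gap reconstruction by iterated one-step nonlinear prediction; a generic scikit-learn MLP stands in for the paper's adaptive spline neural network, and the subband decomposition is omitted (predictor order, network size and iteration count are illustrative).

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fill_gap(signal, gap_start, gap_len, order=32):
        """Fill a gap of corrupted samples by iterated one-step prediction."""
        past = signal[:gap_start]
        # training pairs: a window of `order` samples -> the next sample
        X = np.array([past[i:i + order] for i in range(len(past) - order)])
        y = past[order:]
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

        out = signal.copy()
        for n in range(gap_start, gap_start + gap_len):
            window = out[n - order:n].reshape(1, -1)
            out[n] = net.predict(window)[0]      # predict, then slide forward
        return out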