
Abstracts: Session NNSP-2


NNSP-2.1  

Global Asymptotic Convergence of Nonlinear Relaxation Equations Realised Through a Recurrent Perceptron
Danilo P Mandic (Signal Processing Section, Department of Electrical Engineering, Imperial College, London), Jonathon A Chambers (Signal Processing Section, Department of Electrical Engineering, Imperial College, London, UK)

Conditions for Global Asymptotic Stability (GAS) of a nonlinear relaxation equation realised by a Nonlinear Autoregressive Moving Average (NARMA) recurrent perceptron are provided. Convergence is derived through Fixed Point Iteration (FPI) techniques, based upon the contraction mapping property of the nonlinear activation function of a neuron. Furthermore, nesting is shown to be a spatial interpretation of an FPI, which underpins a recently proposed Pipelined Recurrent Neural Network (PRNN) for nonlinear signal processing.
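
The contraction mapping argument is easy to see in code. A minimal sketch, assuming a toy scalar relaxation y <- tanh(w*y + b) rather than the authors' full NARMA perceptron: with |w| < 1 and |tanh'| <= 1 everywhere, the map is a contraction, so fixed point iteration converges to the same point from any initial state, which is the essence of a GAS result.

    import numpy as np

    def phi(y, w=0.8, b=0.1):
        # Illustrative scalar relaxation map: a tanh neuron with feedback
        # weight |w| < 1, hence a contraction (|tanh'| <= 1 everywhere).
        return np.tanh(w * y + b)

    for y0 in (-5.0, 0.0, 5.0):        # widely separated initial states
        y = y0
        for _ in range(100):           # fixed point iteration y <- phi(y)
            y = phi(y)
        print(f"start {y0:+.1f} -> fixed point {y:.6f}")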


NNSP-2.2  

A Neural Network for Data Association
Michel Winter, Gérard Favier (Laboratoire I3S)

This paper presents a new neural solution to the data association problem. This problem, also known as the multidimensional assignment problem, arises in data fusion systems such as radar and sonar target tracking and robotic vision. Since it leads to an NP-complete combinatorial optimization problem, the optimal solution cannot be reached in acceptable computation time, and approximation methods such as Lagrangian relaxation are necessary. In this paper, we propose an alternative approach based on a Hopfield neural model. We show that it converges to a solution that respects the constraints of the association problem. Simulation results are presented to illustrate the behaviour of the proposed neural solution on an artificial association problem.
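
As a rough sketch of a Hopfield-style relaxation, here applied to a plain two-dimensional assignment problem rather than the paper's multidimensional case; the benefit matrix, penalty weight, and step size are all assumed:

    import numpy as np

    # Minimise E(x) = -sum_ij B_ij x_ij + (c/2) * (row/column constraint
    # violations), with x_ij = sigmoid(u_ij) and gradient descent on u.
    rng = np.random.default_rng(0)
    B = rng.random((4, 4))                  # benefit of associating i with j
    c, lr = 5.0, 0.5                        # penalty weight and step size
    u = rng.normal(scale=0.1, size=(4, 4))  # internal neuron states

    for _ in range(2000):
        x = 1.0 / (1.0 + np.exp(-u))        # neuron outputs in (0, 1)
        row_err = x.sum(axis=1, keepdims=True) - 1.0  # rows should sum to 1
        col_err = x.sum(axis=0, keepdims=True) - 1.0  # columns too
        grad_E = -B + c * (row_err + col_err)         # dE/dx
        u -= lr * grad_E * x * (1.0 - x)    # chain rule through the sigmoid
    print(np.round(x, 2))   # close to a permutation matrix when it works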


NNSP-2.3  

Training MLPs Layer-by-layer with the Information Potential
Dongxin Xu, Jose C. Principe (Computational NeuroEngineering Laboratory, Department of Electrical and Computer Engineering, University of Florida)

In information processing, one fundamental issue is how to measure the relationship between two variables based only on their samples. In a previous paper, the idea of the Information Potential, formulated from the so-called Quadratic Mutual Information, was introduced and successfully applied to problems such as Blind Source Separation and Pose Estimation of SAR (Synthetic Aperture Radar) Images. This paper shows how the information potential can be used to train an MLP (multilayer perceptron) layer by layer, which provides evidence that the hidden layer of an MLP serves as an "information filter" that tries to best represent the desired output in that layer in the statistical sense of mutual information.
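
The information potential itself can be estimated directly from samples. A minimal sketch, assuming a Gaussian kernel of (arbitrary) width sigma; Renyi's quadratic entropy is then H2(X) = -log V(X):

    import numpy as np

    def information_potential(x, sigma=0.5):
        # V(X) = (1/N^2) * sum_i sum_j G(x_i - x_j; 2*sigma^2):
        # pairwise Gaussian interactions between all sample "particles".
        d = x[:, None] - x[None, :]
        return np.mean(np.exp(-d**2 / (4 * sigma**2))
                       / np.sqrt(4 * np.pi * sigma**2))

    rng = np.random.default_rng(1)
    tight = rng.normal(scale=0.2, size=200)   # concentrated samples
    wide = rng.normal(scale=2.0, size=200)    # spread-out samples
    # Higher potential corresponds to lower quadratic entropy:
    print(information_potential(tight), ">", information_potential(wide))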


NNSP-2.4  

Time Series Prediction via Neural Network Inversion
Lian Yan, David J Miller (The Pennsylvania State University)

In this work, we propose neural network inversion of a backward predictor as a technique for multi-step prediction of dynamic time series. It may be difficult to train a large network to capture the correlation present in some dynamic time series represented by small data sets. The new approach combines an estimate obtained from a forward predictor with an estimate obtained by inverting a backward predictor, to capture the correlation more efficiently and to achieve more accurate predictions. Inversion allows us to make causal use of prediction backward in time. A new regularization method is also developed to make neural network inversion less ill-posed. Experimental results on two benchmark series demonstrate the new approach's significant improvement over standard forward prediction at comparable complexity.
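
A minimal sketch of the inversion step: given a trained network f and a target output y*, gradient descent is run on the input x rather than on the weights. The toy two-layer network, the target, and the ridge penalty used to tame ill-posedness are illustrative assumptions, not the authors' regularizer:

    import numpy as np

    rng = np.random.default_rng(2)
    W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8, 1))
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1, 1))

    def f(x):
        h = np.tanh(W1 @ x + b1)            # hidden layer
        return W2 @ h + b2, h

    y_star, lam, lr = np.array([[0.3]]), 1e-3, 0.05
    x = np.zeros((1, 1))                    # initial guess for the input
    for _ in range(500):
        y, h = f(x)
        dy = y - y_star                     # output error
        dh = (W2.T @ dy) * (1.0 - h**2)     # backprop through the tanh layer
        x -= lr * (W1.T @ dh + lam * x)     # gradient step on the *input*
    print(x.item(), f(x)[0].item())         # f(x) should be close to y*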


NNSP-2.5  

Partial Likelihood for Estimation of Multi-Class Posterior Probabilities
Tulay Adali, Hongmei Ni, Bo Wang (University of Maryland, Baltimore County)

Partial likelihood (PL) provides a unified statistical framework for developing and studying adaptive techniques for nonlinear signal processing [1]. In this paper, we present the general formulation for learning posterior probabilities on the PL cost for multi-class classifier design. We show that the fundamental information-theoretic relationship for learning on the PL cost, the equivalence of likelihood maximization and relative entropy minimization, is satisfied in the multi-class case by the perceptron probability model with softmax [2] normalization. We note the inefficiency of training a softmax network and propose an efficient multi-class equalizer structure based on binary coding of the output classes. We show that the well-formed property of the PL cost [1,7] is satisfied for the softmax and the new multi-class classifier. We present simulation results to demonstrate this fact, and note that although the traditional mean square error (MSE) cost uses the available information more efficiently than the PL cost in the multi-class case, the new multi-class equalizer based on binary coding is much more effective in tracking abrupt changes, owing to the well-formed property of the cost it uses.
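
A sketch of the softmax component of this story: for a single-layer softmax model, gradient ascent on the log-likelihood coincides with minimizing the relative entropy between the empirical and model class posteriors, the equivalence noted above. The synthetic three-class data and learning rate are assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 0] - X[:, 1] > 0)

    W = np.zeros((2, 3))
    for _ in range(200):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)       # softmax class posteriors
        onehot = np.eye(3)[y]
        W += 0.1 * X.T @ (onehot - p) / len(X)  # ascent on log-likelihood
    print((p.argmax(axis=1) == y).mean())       # training accuracy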


NNSP-2.6  

Hybrid Sequential Monte Carlo / Kalman Methods to Train Neural Networks in Non-Stationary Environments
Joao F de Freitas, Mahesan Niranjan, Andrew H Gee (Cambridge University)

In this paper, we propose a novel sequential algorithm for training neural networks in non-stationary environments. The approach is based on a Monte Carlo method known as the sampling-importance resampling simulation algorithm. We derive our algorithm using a Bayesian framework, which allows us to learn the probability density functions of the network weights and outputs. Consequently, it is possible to compute various statistical estimates including centroids, modes, confidence intervals and kurtosis. The algorithm performs a global search for minima in parameter space by monitoring the errors and gradients at several points in the error surface. This global optimisation strategy is shown to perform better than local optimisation paradigms such as the extended Kalman filter.
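
A compressed sketch of the sampling-importance resampling step applied to model parameters: each particle is one candidate weight, scored by the likelihood of the newest observation and then resampled. The one-weight "network", drift model and noise levels are assumptions chosen to keep the example small:

    import numpy as np

    rng = np.random.default_rng(4)
    n_part, q, r = 500, 0.05, 0.1          # particles, drift and obs. noise
    w = rng.normal(size=n_part)            # initial particle cloud over w

    w_true = 0.5
    for t in range(200):
        w_true += 0.005                    # the environment slowly drifts
        x = rng.normal()
        y = np.tanh(w_true * x) + rng.normal(scale=r)
        w += rng.normal(scale=q, size=n_part)     # diffuse the particles
        lik = np.exp(-0.5 * ((y - np.tanh(w * x)) / r) ** 2)
        lik /= lik.sum()                   # normalised importance weights
        w = w[rng.choice(n_part, n_part, p=lik)]  # resampling step
    print(w_true, w.mean())                # posterior mean should track w_true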


NNSP-2.7  

Reconstruction and Prediction of Nonlinear Dynamical Systems: A Hierarchical Bayes Approach with Neural Nets
Takashi Matsumoto, Motoki Saito, Yoshinori Nakajima, Junjiro Sugi, Hiroaki Hamagishi (Waseda University)

When nonlinearity is involved, time series prediction becomes a rather difficult task, and conventional linear methods have had limited success for various reasons. One of the greatest challenges stems from the fact that typical observation data is a scalar time series, so the dimension of the underlying nonlinear dynamical system (the embedding dimension) is unknown. This paper proposes a Hierarchical Bayesian approach to nonlinear time series prediction problems. This class of schemes considers a family of prior distributions parameterized by hyperparameters instead of a single prior, which makes the resulting algorithms less dependent on any particular prior. One can estimate the posteriors of the weight parameters, the hyperparameters, and the embedding dimension by marginalization with respect to the weight parameters and hyperparameters. The proposed scheme is tested on two examples: (i) a chaotic time series, and (ii) building air-conditioning load prediction.
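
A deliberately simplified sketch of selecting the embedding dimension by marginal likelihood, with a Bayesian *linear* predictor on delay vectors standing in for the paper's neural network and fixed hyperparameters alpha, beta in place of the full hierarchical treatment:

    import numpy as np

    def log_evidence(s, d, alpha=1e-2, beta=1e2):
        # Evidence of a Bayesian linear model mapping [s_{t-d},...,s_{t-1}]
        # to s_t, with weight prior N(0, 1/alpha) and noise variance 1/beta.
        X = np.array([s[t - d:t] for t in range(d, len(s))])
        y = s[d:]
        N = len(y)
        A = alpha * np.eye(d) + beta * X.T @ X      # posterior precision
        m = beta * np.linalg.solve(A, X.T @ y)      # posterior mean weights
        E = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * m @ m
        return 0.5 * (d * np.log(alpha) + N * np.log(beta)
                      - np.linalg.slogdet(A)[1] - N * np.log(2 * np.pi)) - E

    rng = np.random.default_rng(5)
    s = np.zeros(500)
    s[:2] = rng.normal(size=2)
    for t in range(2, 500):                # an AR(2) series, so true d = 2
        s[t] = 0.8 * s[t-1] - 0.5 * s[t-2] + 0.1 * rng.normal()
    for d in range(1, 6):
        print(d, round(log_evidence(s, d), 1))  # should peak near d = 2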


NNSP-2.8  

Sequential Bayesian Computation of Logistic Regression Models
Mahesan Niranjan (Cambridge University)

The Extended Kalman Filter (EKF) algorithm for identification of a state space model is shown to be a sensible tool for estimating a Logistic Regression Model sequentially. A Gaussian probability density over the parameters of the Logistic model is propagated on a sample-by-sample basis. Two other approaches, the Laplace Approximation and the Variational Approximation, are compared with the state space formulation. Features of the latter approach, such as the possibility of inferring noise levels by maximising the "innovation probability", are indicated. Experimental illustrations of these ideas on a synthetic problem and two real world problems are discussed.
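
A minimal sketch of the state space view: the logistic weights form the state of a random-walk model, the sigmoid is the (linearised) observation function, and each incoming sample triggers one EKF measurement update. The noise levels q and r are assumptions:

    import numpy as np

    rng = np.random.default_rng(6)
    d, q, r = 2, 1e-4, 0.1
    w_hat, P = np.zeros(d), np.eye(d)      # Gaussian posterior over weights

    w_true = np.array([2.0, -1.0])
    for t in range(1000):
        x = rng.normal(size=d)
        y = float(rng.random() < 1 / (1 + np.exp(-w_true @ x)))  # binary label
        P += q * np.eye(d)                 # time update (random-walk state)
        p = 1 / (1 + np.exp(-w_hat @ x))   # predicted class probability
        H = p * (1 - p) * x                # Jacobian of the sigmoid w.r.t. w
        S = H @ P @ H + r                  # innovation variance
        K = P @ H / S                      # Kalman gain
        w_hat += K * (y - p)               # measurement update
        P -= np.outer(K, H @ P)            # covariance update
    print(w_true, np.round(w_hat, 2))      # w_hat should be near w_true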



