Neural Networks: a new method for classification and several applications

Organized by the Complex Systems Modeling Research Focus Area of the Modeling, Algorithms, and Informatics Group (CCS-3).

Armando Vieira, Physics Department, Instituto Superior de Engenharia do Porto, Portugal.

March 18, CNLS Conference Room, 1:00pm-2:00pm

This talk is divided into two parts:

Part I: HLVQ, a new method to train Neural Networks

In this talk I present a new method to train neural networks, called Hidden Layer Learning Vector Quantization (HLVQ). It is shown that, for some classification problems, HLVQ outperforms other neural network approaches as well as Simulated Annealing and Support Vector Machines. HLVQ was developed to analyze ion beam data, mainly the characterization of implantations by Rutherford Backscattering. Other applications will also be presented, such as the characterization of DNA helicases from the Simian Virus large T antigen and short-term bankruptcy prediction for private companies.
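The talk abstract does not spell out the algorithm, but HLVQ can be understood as applying LVQ-style prototype updates in the space of a network's hidden-layer activations. The sketch below illustrates that idea only: the data, the fixed random hidden layer (standing in for a trained one), the single-prototype-per-class setup, and the learning rate are all illustrative assumptions, not the method as presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 4 dimensions (illustrative, not from the talk)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Hidden layer of an MLP; a fixed random layer stands in here for a trained one
W = rng.normal(size=(4, 8))
H = np.tanh(X @ W)  # hidden-layer activations for all samples

# One prototype per class, initialised at the class mean in hidden space
protos = np.array([H[y == c].mean(axis=0) for c in (0, 1)])

# LVQ1-style updates in hidden space: pull the winning prototype toward
# same-class samples, push it away from other-class samples
lr = 0.05
for epoch in range(20):
    for h, c in zip(H, y):
        w = np.argmin(((protos - h) ** 2).sum(axis=1))  # nearest prototype
        protos[w] += lr * (h - protos[w]) * (1 if w == c else -1)

# Classify by the nearest prototype in hidden space
pred = np.argmin(((H[:, None, :] - protos[None]) ** 2).sum(axis=2), axis=1)
print("training accuracy:", (pred == y).mean())
```

The key design point is that quantization happens in the learned hidden representation rather than in the raw input space, so the prototypes benefit from whatever features the network has extracted.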

Part II: A multi-objective evolutionary algorithm using approximate fitness evaluations by neural networks.

One of the major difficulties in applying Multi-Objective Evolutionary Algorithms (MOEAs) to real problems is the large number of objective-function evaluations, on the order of thousands, needed to reach an acceptable solution. These evaluations are often time-consuming, obtained by running numerical codes based on expensive methods such as finite differences or finite elements. Reducing the number of evaluations needed to reach an acceptable solution is thus of major importance.

In this work we present a method to accelerate the search of an MOEA by using Artificial Neural Networks (ANNs) to approximate the fitness functions being optimized. Initially the MOEA runs for a small number of generations. A neural network is then trained on the evaluations obtained by the evolutionary algorithm. Once the ANN is trained, the MOEA runs for another set of generations, but using the ANN as an approximation to the exact fitness function. As the algorithm evolves, the population moves into different regions of the search space and the quality of the neural network's approximation deteriorates. When the error becomes prohibitively high, the evolutionary algorithm proceeds using the exact functions. A new training dataset is then collected and used to retrain the ANN. The process continues until an acceptable Pareto front is found.
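The alternation described above (exact evaluations, surrogate training, surrogate-based generations, error monitoring, retraining) can be sketched as a small control loop. Everything concrete below is an assumption for illustration: the Schaffer-style objectives, the population size, the mutation scheme, the error tolerance, and the surrogate itself, where a polynomial least-squares fit stands in for the ANN used in the talk to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two conflicting objectives on a scalar x (Schaffer problem, illustrative):
# the Pareto-optimal set is the interval [0, 2]
def exact_eval(x):
    return np.stack([x ** 2, (x - 2.0) ** 2], axis=-1)

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def select(pop, objs, size):
    # keep non-dominated individuals first, then fill up to `size`
    flags = [not any(dominates(objs[j], objs[i])
                     for j in range(len(pop)) if j != i)
             for i in range(len(pop))]
    order = np.argsort([0 if f else 1 for f in flags], kind="stable")
    return pop[order[:size]]

# Surrogate: a polynomial least-squares model stands in for the ANN
def fit_surrogate(x, objs):
    A = np.stack([np.ones_like(x), x, x ** 2], axis=-1)
    coef, *_ = np.linalg.lstsq(A, objs, rcond=None)
    return lambda q: np.stack([np.ones_like(q), q, q ** 2], axis=-1) @ coef

pop = rng.uniform(-4, 6, size=20)
exact_calls = 0
evaluator, err_tol = None, 0.05

for gen in range(60):
    children = pop + rng.normal(0, 0.3, size=pop.shape)
    merged = np.concatenate([pop, children])
    if evaluator is None:
        # exact phase: evaluate everyone and train the surrogate on the results
        objs = exact_eval(merged); exact_calls += len(merged)
        evaluator = fit_surrogate(merged, objs)
    else:
        # surrogate phase: cheap approximate evaluations
        objs = evaluator(merged)
        # spot-check against a few exact evaluations; retrain on drift
        check = exact_eval(merged[:4]); exact_calls += 4
        if np.abs(objs[:4] - check).mean() > err_tol:
            objs = exact_eval(merged); exact_calls += len(merged)
            evaluator = fit_surrogate(merged, objs)
    pop = select(merged, objs, 20)

print("exact calls:", exact_calls, "vs", 60 * 40, "without a surrogate")
```

The saving comes entirely from the middle branch: whole generations are evaluated by the cheap model, and the expensive code is only consulted for spot checks and for rebuilding the training set when the approximation drifts.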

This method was applied to several multi-objective benchmark functions as well as to a real problem, namely the optimization of a polymer extrusion process. A reduction of up to 50% in the number of exact function calls was achieved.

For more information contact Luis Rocha at rocha@lanl.gov
Last Modified: March 11, 2004