Improvements of back propagation algorithm performance by adaptively changing gain, momentum and learning rate
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Subjects: | |
| Online Access: | http://eprints.uthm.edu.my/2964/ |
| Summary: | In some practical Neural Network (NN) applications, a fast response to external events within an extremely short time is highly demanded. However, back propagation (BP) based on the gradient descent optimization method is clearly unsatisfactory in several applications because of two serious problems associated with BP: slow learning convergence and entrapment in shallow local minima. Over the years, many improvements and modifications of the BP learning algorithm have been reported. In this research, we modify an existing BP learning algorithm with adaptive gain by adaptively changing the momentum coefficient and the learning rate. In learning the patterns, the simulation results indicate that the proposed algorithm can accelerate convergence as well as slide the network through shallow local minima, compared with the conventional BP algorithm. We use five common benchmark classification problems to illustrate the improvement of the proposed algorithm. |
|---|---|
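To make the idea concrete, the following is a minimal sketch of BP training with a sigmoid gain parameter plus simple per-epoch heuristics that adapt the learning rate and momentum. The adaptation rules used here (grow the learning rate and momentum when the error falls, shrink them otherwise) and the XOR toy problem are illustrative assumptions, not the authors' exact algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain):
    # sigmoid with gain c: s(x) = 1 / (1 + exp(-c*x))
    return 1.0 / (1.0 + np.exp(-gain * x))

# XOR as a tiny stand-in classification problem (assumed, for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)

lr, mom, gain = 0.5, 0.5, 1.0       # initial learning rate, momentum, gain
prev_err = np.inf

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1, gain)
    out = sigmoid(h @ W2, gain)
    err = np.mean((y - out) ** 2)

    # assumed adaptive heuristics: reward progress, penalize regress
    if err < prev_err:
        lr = min(lr * 1.05, 2.0)
        mom = min(mom * 1.02, 0.9)
    else:
        lr = max(lr * 0.7, 1e-3)
        mom = 0.0                   # drop momentum after a bad step
    prev_err = err

    # backward pass; derivative of the gained sigmoid is c * s * (1 - s)
    d_out = (out - y) * gain * out * (1 - out)
    d_h = (d_out @ W2.T) * gain * h * (1 - h)

    # gradient-descent updates with momentum
    dW2 = -lr * (h.T @ d_out) + mom * dW2
    dW1 = -lr * (X.T @ d_h) + mom * dW1
    W2 += dW2
    W1 += dW1

print(f"final MSE: {err:.4f}")
```

The same skeleton would apply to the conventional BP baseline by fixing `lr` and `mom` to constants, which is one way to reproduce the kind of convergence-speed comparison the abstract describes.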