Sigmoidal Feedforward Network Training

Role of Learning Rate and Momentum Constant



Sigmoidal feedforward neural networks with one or more hidden layers are among the most popular methods for solving learning problems. This book proposes a new approach to training sigmoidal feedforward networks based on random learning rates and momentum constants. Random weight initialization is generally recommended: if the weights and thresholds are initialized to equal values, they tend to move together during training, which restricts the network's degrees of freedom (the number of distinct weights and thresholds), whereas random initial weights allow every weight to take a different value. Because the curvature of the error surface is not uniform, the symmetry-breaking capability of random learning rates and momentum constants, with or without random weight initialization, is found to drive training to lower minima of the error function. The aim of the book is therefore to emphasize the optimal choice of random learning rates and momentum constants.
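The idea above can be sketched in a few lines of NumPy: a one-hidden-layer sigmoidal network trained by backpropagation with a momentum term, where each weight gets its own randomly drawn learning rate and momentum constant. This is a minimal illustration, not the book's algorithm; the sampling ranges for the learning rates and momentum constants are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a classic problem needing one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random weight initialization (2 inputs -> 4 hidden -> 1 output)
W1 = rng.uniform(-1, 1, (2, 4))
W2 = rng.uniform(-1, 1, (4, 1))

# Per-weight random learning rates and momentum constants break symmetry;
# the ranges below are illustrative assumptions, not the book's values.
eta1 = rng.uniform(0.05, 0.5, W1.shape)
eta2 = rng.uniform(0.05, 0.5, W2.shape)
mu1 = rng.uniform(0.0, 0.9, W1.shape)
mu2 = rng.uniform(0.0, 0.9, W2.shape)

v1 = np.zeros_like(W1)  # momentum (velocity) terms
v2 = np.zeros_like(W2)

mse0 = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

for epoch in range(5000):
    h = sigmoid(X @ W1)                 # hidden activations
    out = sigmoid(h @ W2)               # network output
    err = out - y
    d2 = err * out * (1 - out)          # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)      # hidden-layer delta
    g2 = h.T @ d2                       # gradients of squared error
    g1 = X.T @ d1
    v2 = mu2 * v2 - eta2 * g2           # momentum update, per-weight constants
    v1 = mu1 * v1 - eta1 * g1
    W2 += v2
    W1 += v1

mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

With shared (non-random) constants the update rule reduces to standard backpropagation with momentum; drawing a distinct constant per weight is what gives each weight an independent trajectory even from a symmetric starting point.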


Savita Ahlawat


Dr. Savita Ahlawat is currently working as a Reader (CSE Dept.) at MSIT, New Delhi. She holds B.E., M.Tech. (IT), and Ph.D. (CSE) degrees and has 15 years of teaching experience. She has published around 25 papers in international journals and conferences. Her research interests include Pattern Recognition, Machine Learning, Data Science, and Computer Vision.

Publishing House:

LAP LAMBERT Academic Publishing


Neural Network, Feed Forward Neural Networks, Momentum Constant, Artificial Intelligence, Artificial Neural Network (ANN), Sigmoidal feed forward neural network, ANN training
