METAHEURISTIC PROCEDURES FOR TRAINING NEURAL NETWORKS provides successful implementations of metaheuristic methods for neural network training. Moreover, the basic principles and fundamental ideas given in the book will allow readers to create successful training methods of their own. Apart from Chapter 1, in which classical training methods are reviewed for the sake of completeness, we have classified the chapters into three main categories. The first is devoted to local search-based methods, which include Simulated Annealing, Tabu Search, and Variable Neighborhood Search. The second part of the book presents the most effective population-based methods, such as Estimation of Distribution Algorithms, Scatter Search, and Genetic Algorithms. Finally, the third part covers other advanced techniques, such as Ant Colony Optimization, Co-evolutionary methods, GRASP, and Memetic Algorithms. All of these methods have been shown to produce high-quality solutions for a wide range of hard optimization problems. The book's objective, however, is to provide broad coverage of the concepts, methods, and tools of this important area of ANN research within the realm of continuous optimization.
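
To make the general idea concrete, the sketch below shows how a local search-based metaheuristic of the kind covered in the first part, here Simulated Annealing, can drive neural network training by treating the flattened weight vector as the candidate solution and the training error as the objective to minimize. This is a minimal illustrative sketch, not code from the book: the XOR toy data, the network size, and the names forward, mse, and simulated_annealing are assumptions chosen only for this example.

```python
import numpy as np

# Hypothetical toy data: the XOR problem (illustration only, not from the book).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1 (2x4), b1 (4), W2 (4), b2 (1)

def forward(w, X):
    """Evaluate a 2-N_HIDDEN-1 network whose weights are packed in the flat vector w."""
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # linear output

def mse(w):
    """Training error used as the objective function the metaheuristic minimizes."""
    return float(np.mean((forward(w, X) - y) ** 2))

def simulated_annealing(n_iter=20000, t0=1.0, cooling=0.9995, step=0.3, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.5, size=N_WEIGHTS)   # random initial solution
    best_w, best_e = w.copy(), mse(w)
    e, t = best_e, t0
    for _ in range(n_iter):
        # Neighbor: perturb the current weight vector with Gaussian noise.
        candidate = w + rng.normal(scale=step, size=N_WEIGHTS)
        e_new = mse(candidate)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if e_new < e or rng.random() < np.exp((e - e_new) / max(t, 1e-12)):
            w, e = candidate, e_new
            if e < best_e:
                best_w, best_e = w.copy(), e
        t *= cooling   # geometric cooling schedule
    return best_w, best_e

if __name__ == "__main__":
    weights, error = simulated_annealing()
    print("best training MSE:", error)
    print("predictions:", np.round(forward(weights, X), 3))
```

The same skeleton carries over to the population-based and advanced methods listed above: only the way candidate weight vectors are generated and selected changes, while the network evaluation and error function remain the objective being optimized.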