- Reading level: 18+ years
- Paperback: 358 pages
- Publisher: MIT Press (10 September 2014)
- Language: English
- ISBN-10: 0262527014
- ISBN-13: 978-0262527019
- Product Dimensions: 15.2 x 3 x 22.9 cm
- Amazon Bestsellers Rank: #383,626 in Books
Neural Smithing – Supervised Learning in Feedforward Artificial Neural Networks (A Bradford Book) Paperback – Import, 10 Sep 2014
Most helpful customer reviews on Amazon.com
Early in my graduate career I began working with neural networks and discovered this book on an electronic bookshelf available at my university. After printing chapter after chapter to read on subway rides home, I ended up buying it for convenience. It gave me the background I needed to code up a basic artificial neural network in C++ and then extend it to fit my needs.
The style of the writing strikes the perfect balance: enough detail to understand a concept or method without unnecessary wordiness. Each chapter covers an important aspect of neural network development and application - for example, internode weight initialization techniques - and acts as a sort of mini-review of the most popular methods, with a clear explanation of the pros and cons of each.
This is an excellent bookshelf addition for anyone who works with neural networks.
The topics covered are reminiscent of those discussed in parts 2 and 3 of the Neural Network FAQ. In chapter 6, the relationships between learning rate, momentum, training time, and learning modes are presented graphically. This helps me rule out and avoid learning parameters that are unlikely to improve the NN's performance, which is especially important if the dataset is large and the NN program is implemented in Java.
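The interaction between learning rate and momentum that the reviewer mentions can be sketched with the standard gradient-descent-with-momentum update. This is an illustrative sketch, not code from the book; the function name and parameter values are my own:

```python
# Minimal sketch of gradient descent with momentum, showing how
# learning rate (lr) and momentum (mu) interact in a weight update.
# Illustrative only; names and values are not from the book.

def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    """One update: velocity accumulates a decaying sum of past
    gradients, so mu smooths the trajectory while lr sets step size."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Minimizing a simple quadratic error with gradient dE/dw = w:
# with these settings the weight spirals in toward the optimum at 0.
w, v = 5.0, 0.0
for _ in range(100):
    w, v = momentum_step(w, grad=w, velocity=v, lr=0.1, mu=0.9)
```

Tuning pairs of (lr, mu) on a problem like this makes it easy to see which regions of parameter space oscillate, diverge, or converge slowly, which is the kind of relationship chapter 6 presents graphically.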
If the aim is to develop a NN solution that gives you the best results, I find both chapter 7 (heuristics for weight initialization) and chapter 16 (heuristics for improving generalization) essential; they save me a lot of time that would otherwise go into reading many journals.
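One widely used heuristic of the kind surveyed in the weight-initialization chapter draws each weight uniformly from a range scaled by the node's fan-in. The sketch below is a common rule of thumb, not necessarily the book's exact prescription; the function name is illustrative:

```python
import math
import random

def init_weights(fan_in, fan_out, seed=None):
    """Build a fan_in x fan_out weight matrix with values drawn
    uniformly from [-1/sqrt(fan_in), +1/sqrt(fan_in)] - a common
    heuristic that keeps initial net inputs small, so sigmoid units
    start in their near-linear region rather than saturated.
    Illustrative sketch; not code from the book."""
    rng = random.Random(seed)
    bound = 1.0 / math.sqrt(fan_in)
    return [[rng.uniform(-bound, bound) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Weights for a layer with 100 inputs and 10 hidden units.
weights = init_weights(fan_in=100, fan_out=10, seed=0)
```

Scaling by fan-in matters because a node's net input is a sum over fan_in weighted terms; without the scaling, wide layers start with large net inputs and saturated, slow-learning units.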
In summary, this book has helped me develop the art of NN optimization. It shows me how to visualize decision surfaces and the various graphical relationships between learning parameters and components of NN topology. I think you will find this book very useful once your NN program is up and running and you are looking for ideas and explanations of how to improve the NN's performance further.