$129.25
New Hardcover
Ships in 1 to 3 days
Available for in-store pickup in 7 to 12 days
Qty: 25 | Store: Remote Warehouse | Section: Mathematics - Probability and Statistics

An Elementary Introduction to Statistical Learning Theory (Wiley Series in Probability and Statistics)

by Sanjeev Kulkarni and Gilbert Harman

Synopses & Reviews

Publisher Comments:

A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning

A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference.

Promoting the fundamental goal of statistical learning, knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology when used along with the needed techniques for evaluating the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters feature coverage of topics such as the pattern recognition problem, optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting.
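To give a flavor of the simplest of the topics listed above, here is a minimal sketch of the nearest neighbor rule. The data, labels, and function names are invented for illustration and are not taken from the book, whose own examples and notation may differ.

```python
import math

def nearest_neighbor_classify(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (feature_vector, label) pairs; the rule simply
    assigns the query the label of its nearest neighbor in Euclidean
    distance.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy two-class problem in two dimensions.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(nearest_neighbor_classify(train, (0.2, 0.1)))  # closest training points are class "A"
```

The book's Chapter 7 treats this rule in full, including its asymptotic performance and the k-nearest-neighbor generalization.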

Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study.

An Elementary Introduction to Statistical Learning Theory is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.

Book News Annotation:

Statistical learning theory is a relatively new field that has emerged from engineering studies of pattern recognition and machine learning, with inputs from many other areas. Kulkarni (electrical engineering) and Harman (philosophy, both Princeton U.) have been jointly teaching an introductory course in both of their departments that is open to any student. Majors in natural and social sciences, engineering, and the humanities, from freshman to senior, have attended. Among the topics are probability densities, the pattern recognition problem, the nearest neighbor rule, infinite VC (Vapnik-Chervonenkis) dimension, the function estimation problem, and support vector machines. Annotation ©2011 Book News, Inc., Portland, OR (booknews.com)


Synopsis:

A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory provides a broad and accessible introduction to the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Exploring topics that are not often covered in introductory-level books on statistical learning theory, including PAC learning, VC dimension, and simplicity, the authors present upper-undergraduate and graduate students with the basic theory behind contemporary machine learning and suggest that it serves as an excellent framework for philosophical thinking about inductive inference.
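For readers unfamiliar with the PAC criterion mentioned above, the standard "probably approximately correct" formulation (paraphrased here, not quoted from the book) asks that the rule learned from n samples come within any tolerance of the best rule in the class, with high probability:

```latex
% Standard PAC criterion, paraphrased. \hat{h}_n is the rule learned from
% n training samples, \mathcal{H} the class of candidate rules, and R(h)
% the probability of misclassification under rule h.
\[
  \Pr\!\left( R(\hat{h}_n) - \inf_{h \in \mathcal{H}} R(h) > \varepsilon \right) < \delta
  \qquad \text{for all } n \ge n_0(\varepsilon, \delta),
\]
% required to hold for every tolerance \varepsilon > 0, every confidence
% level \delta > 0, and every underlying data distribution.
```

Chapters 11 through 13 of the book develop this criterion and its connection to VC dimension.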

About the Author

SANJEEV KULKARNI, PhD, is Professor in the Department of Electrical Engineering at Princeton University, where he is also an affiliated faculty member in the Department of Operations Research and Financial Engineering and the Department of Philosophy. Dr. Kulkarni has published widely on statistical pattern recognition, nonparametric estimation, machine learning, information theory, and other areas. A Fellow of the IEEE, he was awarded Princeton University's President's Award for Distinguished Teaching in 2007.

GILBERT HARMAN, PhD, is James S. McDonnell Distinguished University Professor in the Department of Philosophy at Princeton University. A Fellow of the Cognitive Science Society, he is the author of more than fifty published articles in his areas of research interest, which include ethics, statistical learning theory, psychology of reasoning, and logic.

Table of Contents

Preface.

1. Introduction: Classification, Learning, Features, Applications.

1.1 Scope.

1.2 Why Machine Learning?

1.3 Some Applications.

1.4 Measurements, Features, and Feature Vectors.

1.5 The Need for Probability.

1.6 Supervised Learning.

1.7 Summary.

1.8 Appendix:  Induction.

1.9 Questions.

1.10 References.

2. Probability.

2.1 Probability of Some Basic Events.

2.2 Probabilities of Compound Events.

2.3 Conditional Probability.

2.4 Drawing Without Replacement.

2.5 A Classic Birthday Problem.

2.6 Random Variables.

2.7 Expected Value.

2.8 Variance.

2.9 Summary.

2.10 Appendix: Interpretations of Probability.

2.11 Questions.

2.12 References.

3. Probability Densities.

3.1 An Example in Two Dimensions.

3.2 Random Numbers in [0, 1].

3.3 Density Functions.

3.4 Probability Densities in Higher Dimensions.

3.5 Joint and Conditional Densities.

3.6 Expected Value and Variance.

3.7 Laws of Large Numbers.

3.8 Summary.

3.9 Appendix: Measurability.

3.10 Questions.

3.11 References.

4. The Pattern Recognition Problem.

4.1 A Simple Example.

4.2 Decision Rules.

4.3 Success Criterion.

4.4 The Best Classifier: Bayes Decision Rule.

4.5 Continuous Features and Densities.

4.6 Summary.

4.7 Appendix: Uncountably Many.

4.8 Questions.

4.9 References.

5. The Optimal Bayes Decision Rule.

5.1 Bayes Theorem.

5.2 Bayes Decision Rule.

5.3 Optimality and Some Comments.

5.4 An Example.

5.5 Bayes Theorem and Decision Rule With Densities.

5.6 Summary.

5.7 Appendix: Defining Conditional Probability.

5.8 Questions.

5.9 References.

6. Learning from Examples.

6.1 Lack of Knowledge of Distributions.

6.2 Training Data.

6.3 Assumptions on the Training Data.

6.4 A Brute Force Approach to Learning.

6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch.

6.6 Summary.

6.7 Appendix: What Sort of Learning?

6.8 Questions.

6.9 References.

7. The Nearest Neighbor Rule.

7.1 The Nearest Neighbor Rule.

7.2 Performance of the Nearest Neighbor Rule.

7.3 Intuition and Proof Sketch of Performance.

7.4 Using More Neighbors.

7.5 Summary.

7.6 Appendix: When People Use Nearest Neighbor Reasoning.

7.7 Questions.

7.8 References.

8. Kernel Rules.

8.1 Motivation.

8.2 A Variation on Nearest Neighbor Rules.

8.3 Kernel Rules.

8.4 Universal Consistency of Kernel Rules.

8.5 Potential Functions.

8.6 More General Kernels.

8.7 Summary.

8.8 Appendix: Kernels, Similarity, and Features.

8.9 Questions.

8.10 References.

9. Neural Networks: Perceptrons.

9.1 Multilayer Feed Forward Networks.

9.2 Neural Networks for Learning and Classification.

9.3 Perceptrons.

9.4 Learning Rule for Perceptrons.

9.5 Representational Capabilities of Perceptrons.

9.6 Summary.

9.7 Appendix: Models of Mind.

9.8 Questions.

9.9 References.

10. Multilayer Networks.

10.1 Representation Capabilities of Multilayer Networks.

10.2 Learning and Sigmoidal Outputs.

10.3 Training Error and Weight Space.

10.4 Error Minimization by Gradient Descent.

10.5 Backpropagation.

10.6 Derivation of Backpropagation Equations.

10.7 Summary.

10.8 Appendix: Gradient Descent and Reasoning Toward Reflective Equilibrium.

10.9 Questions.

10.10 References.

11. PAC Learning.

11.1 Class of Decision Rules.

11.2 Best Rule From a Class.

11.3 Probably Approximately Correct Criterion.

11.4 PAC Learning.

11.5 Summary.

11.6 Appendix: Identifying Indiscernibles.

11.7 Questions.

11.8 References.

12. VC Dimension.

12.1 Approximation and Estimation Errors.

12.2 Shattering.

12.3 VC Dimension.

12.4 Learning Result.

12.5 Some Examples.

12.6 Application to Neural Nets.

12.7 Summary.

12.8 Appendix: VC Dimension and Popper Dimension.

12.9 Questions.

12.10 References.

13. Infinite VC Dimension.

13.1 A Hierarchy of Classes and Modified PAC Criterion.

13.2 Misfit Versus Complexity Tradeoff.

13.3 Learning Results.

13.4 Inductive Bias and Simplicity.

13.5 Summary.

13.6 Appendix: Uniform Convergence and Universal Consistency.

13.7 Questions.

13.8 References.

14. The Function Estimation Problem.

14.1 Estimation.

14.2 Success Criterion.

14.3 Best Estimator: Regression Function.

14.4 Learning in Function Estimation.

14.5 Summary.

14.6 Appendix: Regression Toward the Mean.

14.7 Questions.

14.8 References.

15. Learning Function Estimation.

15.1 Review of the Function Estimation/Regression Problem.

15.2 Nearest Neighbor Rules.

15.3 Kernel Methods.

15.4 Neural Networks Learning.

15.5 Estimation With a Fixed Class of Functions.

15.6 Shattering, Pseudo-Dimension, and Learning.

15.7 Conclusion.

15.8 Appendix: Accuracy, Precision, Bias, and Variance in Estimation.

15.9 Questions.

15.10 References.

16. Simplicity.

16.1 Simplicity in Science.

16.2 Ordering Hypotheses.

16.3 Two Examples.

16.4 Simplicity as Simplicity of Representation.

16.5 Pragmatic Theory of Simplicity.

16.6 Simplicity and Global Indeterminacy.

16.7 Summary.

16.8 Appendix: Basic Science and Statistical Learning Theory.

16.9 Questions.

16.10 References.

17. Support Vector Machines.

17.1 Mapping the Feature Vectors.

17.2 Maximizing the Margin.

17.3 Optimization and Support Vectors.

17.4 Implementation and Connection to Kernel Methods.

17.5 Details of the Optimization Problem.

17.6 Summary.

17.7 Appendix: Computation.

17.8 Questions.

17.9 References.

18. Boosting.

18.1 Weak Learning Rules.

18.2 Combining Classifiers.

18.3 Distribution on the Training Examples.

18.4 The AdaBoost Algorithm.

18.5 Performance on Training Data.

18.6 Generalization Performance.

18.7 Summary.

18.8 Appendix: Ensemble Methods.

18.9 Questions.

18.10 References. 

Product Details

ISBN:
9780470641835
Author:
Kulkarni, Sanjeev
Publisher:
John Wiley & Sons
Author:
Harman, Gilbert
Subject:
Statistics
Subject:
Computational & Graphical Statistics
Subject:
Mathematics | Probability and Statistics
Copyright:
Series:
Wiley Series in Probability and Statistics
Series Volume:
853
Publication Date:
June 28, 2011
Binding:
Hardcover
Language:
English
Pages:
232
Dimensions:
242 x 159 x 17 mm; 17.6 oz

Related Subjects


Science and Mathematics » Chemistry » Physical Chemistry
Science and Mathematics » Mathematics » Probability and Statistics » General
Science and Mathematics » Mathematics » Probability and Statistics » Statistics
