Synopses & Reviews
Want to tap the power behind search rankings, product recommendations, social bookmarking, and online matchmaking? This fascinating book demonstrates how you can build Web 2.0 applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.
Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general -- all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your web site, blog, Wiki, or specialized application. This book explains:
- Collaborative filtering techniques that enable online retailers to recommend products or media
- Methods of clustering to detect groups of similar items in a large dataset
- Search engine features -- crawlers, indexers, query engines, and the PageRank algorithm
- Optimization algorithms that search millions of possible solutions to a problem and choose the best one
- Bayesian filtering, used in spam filters for classifying documents based on word types and other features
- Using decision trees not only to make predictions, but to model the way decisions are made
- Predicting numerical values rather than classifications to build price models
- Support vector machines to match people in online dating sites
- Non-negative matrix factorization to find the independent features in a dataset
- Evolving intelligence for problem solving -- how a computer develops its skill by improving its own code the more it plays a game
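To give a flavor of the first topic above, here is a minimal sketch of user-based collaborative filtering: scoring how similar two users' tastes are with the Pearson correlation coefficient, one of the similarity measures the book covers. The names and ratings below are hypothetical illustrative data, not an excerpt from the book's code.

```python
from math import sqrt

# Hypothetical preference data: user -> {item: rating}.
prefs = {
    "Alice": {"Snakes": 4.5, "Dupree": 1.0, "Superman": 4.0},
    "Bob":   {"Snakes": 4.0, "Dupree": 2.0, "Superman": 5.0},
    "Carol": {"Snakes": 1.0, "Dupree": 4.5, "Superman": 2.0},
}

def pearson_sim(p1, p2):
    """Pearson correlation over the items both users have rated.

    Returns a value in [-1, 1]: +1 means identical tastes,
    -1 means opposite tastes, 0 means no correlation.
    """
    shared = [item for item in prefs[p1] if item in prefs[p2]]
    n = len(shared)
    if n == 0:
        return 0.0
    x = [prefs[p1][i] for i in shared]
    y = [prefs[p2][i] for i in shared]
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    num = sxy - sx * sy / n
    den = sqrt((sxx - sx ** 2 / n) * (syy - sy ** 2 / n))
    return num / den if den else 0.0
```

A recommender built on this would rank the other users by similarity and weight their ratings of unseen items accordingly; here, Alice's tastes correlate positively with Bob's and negatively with Carol's.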
Each chapter includes exercises for extending the algorithms to make them more powerful. Go beyond simple database-backed applications and put the wealth of Internet data to work for you.
"Bravo! I cannot think of a better way for a developer to first learn these algorithms and methods, nor can I think of a better way for me (an old AI dog) to reinvigorate my knowledge of the details."
-- Dan Russell, Google
"Toby's book does a great job of breaking down the complex subject matter of machine-learning algorithms into practical, easy-to-understand examples that can be directly applied to analysis of social interaction across the Web today. If I had this book two years ago, it would have saved precious time going down some fruitless paths."
-- Tim Wolters, CTO, Collective Intellect
This book introduces Web developers to the advanced topic of machine learning and statistics in a clear and concise way, with easy-to-follow examples and code that can be used in their own applications.
Popular social networks such as Facebook, Twitter, and LinkedIn generate a tremendous amount of valuable social data. Who's talking to whom? What are they talking about? How often are they talking? Where are they located? This concise and practical book shows you how to answer these types of questions and more. Each chapter presents a soup-to-nuts approach that combines popular social web data, analysis techniques, and visualization to help you find the needles in the social haystack you've been looking for -- and some you didn't know were there.
With Mining the Social Web, intermediate-to-advanced Python programmers will learn how to collect and analyze social data in a way that lends itself to hacking as well as more industrial-strength analysis. The book is highly readable from cover to cover and tells a coherent story, but you can go straight to chapters of interest if you want to focus on a specific topic.
- Get a concise and straightforward synopsis of the social web landscape so you know which 20% of the space to spend 80% of your time on
- Use easily adaptable scripts hosted on GitHub to harvest data from popular social network APIs including Twitter, Facebook, and LinkedIn
- Learn how to slice and dice social web data with easy-to-use Python tools, and apply more advanced mining techniques such as TF-IDF, cosine similarity, collocation analysis, document summarization, and clique detection
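Two of the techniques named above, TF-IDF weighting and cosine similarity, can be sketched in a few lines of plain Python. The toy corpus here is hypothetical illustrative data, and this is a minimal from-scratch version rather than the book's own implementation.

```python
import math
from collections import Counter

# Tiny hypothetical corpus for illustration.
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf_idf(doc):
    """Weight each term by its frequency in the document (TF),
    discounted by how many documents contain it (IDF)."""
    counts = Counter(doc)
    vec = {}
    for term, tf in counts.items():
        df = sum(1 for d in tokenized if term in d)
        vec[term] = (tf / len(doc)) * math.log(N / df)
    return vec

def cosine(v1, v2):
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Comparing the TF-IDF vectors of the first two documents yields a higher cosine score than comparing the first and third, which share no terms at all; the same idea scales up to ranking tweets, posts, or profiles by textual similarity.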
Facebook, Twitter, and LinkedIn generate a tremendous amount of valuable social data, but how can you find out who's making connections with social media, what they're talking about, or where they're located? This concise and practical book shows you how to answer these questions and more. You'll learn how to combine social web data, analysis techniques, and visualization to help you find what you've been looking for in the social haystack, as well as useful information you didn't know existed.
Each standalone chapter introduces techniques for mining data in different areas of the social Web, including blogs and email. All you need to get started is a programming background and a willingness to learn basic Python tools.
- Get a straightforward synopsis of the social web landscape
- Use adaptable scripts on GitHub to harvest data from social network APIs such as Twitter, Facebook, and LinkedIn
- Learn how to employ easy-to-use Python tools to slice and dice the data you collect
- Explore social connections in microformats with the XHTML Friends Network
- Apply advanced mining techniques such as TF-IDF, cosine similarity, collocation analysis, document summarization, and clique detection
"Data from the social Web is different: networks and text, not tables and numbers, are the rule, and familiar query languages are replaced with rapidly evolving web service APIs. Let Matthew Russell serve as your guide to working with social data sets old (email, blogs) and new (Twitter, LinkedIn, Facebook). Mining the Social Web is a natural successor to Programming Collective Intelligence: a practical, hands-on approach to hacking on data from the social Web with Python." --Jeff Hammerbacher
About the Author
Matthew Russell, Vice President of Engineering at Digital Reasoning Systems (http://www.digitalreasoning.com/) and Principal at Zaffra (http://zaffra.com), is a computer scientist who is passionate about data mining, open source, and web application technologies. He's also the author of Dojo: The Definitive Guide (O'Reilly).
Table of Contents
Praise for Programming Collective Intelligence
Preface; Prerequisites; Style of Examples; Why Python?; Open APIs; Overview of the Chapters; Conventions; Using Code Examples; How to Contact Us; Safari® Books Online; Acknowledgments
Chapter 1: Introduction to Collective Intelligence; 1.1 What Is Collective Intelligence?; 1.2 What Is Machine Learning?; 1.3 Limits of Machine Learning; 1.4 Real-Life Examples; 1.5 Other Uses for Learning Algorithms
Chapter 2: Making Recommendations; 2.1 Collaborative Filtering; 2.2 Collecting Preferences; 2.3 Finding Similar Users; 2.4 Recommending Items; 2.5 Matching Products; 2.6 Building a del.icio.us Link Recommender; 2.7 Item-Based Filtering; 2.8 Using the MovieLens Dataset; 2.9 User-Based or Item-Based Filtering?; 2.10 Exercises
Chapter 3: Discovering Groups; 3.1 Supervised versus Unsupervised Learning; 3.2 Word Vectors; 3.3 Hierarchical Clustering; 3.4 Drawing the Dendrogram; 3.5 Column Clustering; 3.6 K-Means Clustering; 3.7 Clusters of Preferences; 3.8 Viewing Data in Two Dimensions; 3.9 Other Things to Cluster; 3.10 Exercises
Chapter 4: Searching and Ranking; 4.1 What's in a Search Engine?; 4.2 A Simple Crawler; 4.3 Building the Index; 4.4 Querying; 4.5 Content-Based Ranking; 4.6 Using Inbound Links; 4.7 Learning from Clicks; 4.8 Exercises
Chapter 5: Optimization; 5.1 Group Travel; 5.2 Representing Solutions; 5.3 The Cost Function; 5.4 Random Searching; 5.5 Hill Climbing; 5.6 Simulated Annealing; 5.7 Genetic Algorithms; 5.8 Real Flight Searches; 5.9 Optimizing for Preferences; 5.10 Network Visualization; 5.11 Other Possibilities; 5.12 Exercises
Chapter 6: Document Filtering; 6.1 Filtering Spam; 6.2 Documents and Words; 6.3 Training the Classifier; 6.4 Calculating Probabilities; 6.5 A Naïve Classifier; 6.6 The Fisher Method; 6.7 Persisting the Trained Classifiers; 6.8 Filtering Blog Feeds; 6.9 Improving Feature Detection; 6.10 Using Akismet; 6.11 Alternative Methods; 6.12 Exercises
Chapter 7: Modeling with Decision Trees; 7.1 Predicting Signups; 7.2 Introducing Decision Trees; 7.3 Training the Tree; 7.4 Choosing the Best Split; 7.5 Recursive Tree Building; 7.6 Displaying the Tree; 7.7 Classifying New Observations; 7.8 Pruning the Tree; 7.9 Dealing with Missing Data; 7.10 Dealing with Numerical Outcomes; 7.11 Modeling Home Prices; 7.12 Modeling "Hotness"; 7.13 When to Use Decision Trees; 7.14 Exercises
Chapter 8: Building Price Models; 8.1 Building a Sample Dataset; 8.2 k-Nearest Neighbors; 8.3 Weighted Neighbors; 8.4 Cross-Validation; 8.5 Heterogeneous Variables; 8.6 Optimizing the Scale; 8.7 Uneven Distributions; 8.8 Using Real Data--the eBay API; 8.9 When to Use k-Nearest Neighbors; 8.10 Exercises
Chapter 9: Advanced Classification: Kernel Methods and SVMs; 9.1 Matchmaker Dataset; 9.2 Difficulties with the Data; 9.3 Basic Linear Classification; 9.4 Categorical Features; 9.5 Scaling the Data; 9.6 Understanding Kernel Methods; 9.7 Support-Vector Machines; 9.8 Using LIBSVM; 9.9 Matching on Facebook; 9.10 Exercises
Chapter 10: Finding Independent Features; 10.1 A Corpus of News; 10.2 Previous Approaches; 10.3 Non-Negative Matrix Factorization; 10.4 Displaying the Results; 10.5 Using Stock Market Data; 10.6 Exercises
Chapter 11: Evolving Intelligence; 11.1 What Is Genetic Programming?; 11.2 Programs As Trees; 11.3 Creating the Initial Population; 11.4 Testing a Solution; 11.5 Mutating Programs; 11.6 Crossover; 11.7 Building the Environment; 11.8 A Simple Game; 11.9 Further Possibilities; 11.10 Exercises
Chapter 12: Algorithm Summary; 12.1 Bayesian Classifier; 12.2 Decision Tree Classifier; 12.3 Neural Networks; 12.4 Support-Vector Machines; 12.5 k-Nearest Neighbors; 12.6 Clustering; 12.7 Multidimensional Scaling; 12.8 Non-Negative Matrix Factorization; 12.9 Optimization
Third-Party Libraries; Universal Feed Parser; Python Imaging Library; Beautiful Soup; pysqlite; NumPy; matplotlib; pydelicious
Mathematical Formulas; Euclidean Distance; Pearson Correlation Coefficient; Weighted Mean; Tanimoto Coefficient; Conditional Probability; Gini Impurity; Entropy; Variance; Gaussian Function; Dot-Products
Colophon