Synopses & Reviews
Many claims are made about how certain tools, technologies, and practices improve software development. But which claims are verifiable, and which are merely wishful thinking? In this book, leading thinkers such as Steve McConnell, Barry Boehm, and Barbara Kitchenham offer essays that uncover the truth and unmask myths commonly held among the software development community. Their insights may surprise you.
- Are some programmers really ten times more productive than others?
- Does writing tests first help you develop better code faster?
- Can code metrics predict the number of bugs in a piece of software?
- Do design patterns actually make better software?
- What effect does personality have on pair programming?
- What matters more: how far apart people are geographically, or how far apart they are in the org chart?
Contributors include:
Jorge Aranda
Tom Ball
Victor R. Basili
Andrew Begel
Christian Bird
Barry Boehm
Marcelo Cataldo
Steven Clarke
Jason Cohen
Robert DeLine
Madeline Diep
Hakan Erdogmus
Michael Godfrey
Mark Guzdial
Jo E. Hannay
Ahmed E. Hassan
Israel Herraiz
Kim Sebastian Herzig
Cory Kapser
Barbara Kitchenham
Andrew Ko
Lucas Layman
Steve McConnell
Tim Menzies
Gail Murphy
Nachi Nagappan
Thomas J. Ostrand
Dewayne Perry
Marian Petre
Lutz Prechelt
Rahul Premraj
Forrest Shull
Beth Simon
Diomidis Spinellis
Neil Thomas
Walter Tichy
Burak Turhan
Elaine J. Weyuker
Michele A. Whitecraft
Laurie Williams
Wendy M. Williams
Andreas Zeller
Thomas Zimmermann
Synopsis
No doubt, you've heard many claims about how some tool, technology, or practice improves software development. But which claims are verifiable, and which are merely wishful thinking? In this book, leading thinkers such as Steve McConnell, Barry Boehm, and Barbara Kitchenham offer essays that uncover the truth and unmask myths commonly held among the software development community.
Do different programming languages really make people more productive? Is copy-and-paste programming a bad practice? And why do some people find it so hard to learn how to program? By understanding what facts are real and which claims are pure hype, you'll be better equipped to determine the tools, technologies, and best practices that will best address your needs.
Contributions include:
- Elaine Weyuker and Tom Ostrand: "Where do bugs really come from?"
- Steve McConnell: "What do we know about productivity differences among programmers?"
- Laurie Williams: "Is pair programming really more efficient?"
Making Software is a fascinating book that will open your eyes and help you become a better programmer.
Synopsis
Many books present architectures, but few ever discuss how they came to be. This book addresses the question: what are the ingredients of a robust, elegant, flexible, maintainable architecture? Beautiful Architecture lets readers eavesdrop on some of the best minds in software engineering today. In each chapter, a well-known software engineer presents one of his or her favorite pieces of architecture, then explains what makes that architecture particularly elegant, robust, clever, and fit for its purpose: in other words, beautiful. In Beautiful Architecture, thirty master architects think aloud as they work through their projects' architectures, outlining the decisions made and the tradeoffs encountered. Instead of simply presenting a finished architecture as the answer, this book looks behind the scenes at the decisions that shaped it.
About the Author
Diomidis Spinellis is an Associate Professor in the Department of Management Science and Technology at the Athens University of Economics and Business, Greece. His research interests include software engineering, programming languages, Internet information systems, computer security, and intelligent optimization methods. He holds an MEng in Software Engineering and a PhD in Computer Science, both from Imperial College London.
Spinellis is a FreeBSD committer and the author of many open-source software packages, libraries, and tools. His implementation of the Unix sed stream editor is part of all BSD Unix distributions and Apple's Mac OS X. Other tools he has developed include the UMLGraph declarative UML drawing engine, the ckjm tool for calculating Chidamber and Kemerer object-oriented metrics in large Java programs, the Outwit suite for integrating Windows features with command-line tools, the fileprune backup file management facility, and the socketpipe network plumbing utility. In 2004 he adopted and has since been maintaining and enhancing the popular bib2xhtml BibTeX bibliography format to HTML converter. Currently he is also serving as the scientific coordinator of the EU-funded SQO-OSS cooperative research project, a software quality observatory for open-source software.
Spinellis has published two books in Addison-Wesley's "Effective Programming Series": Code Reading: The Open Source Perspective (2004), which received a Software Development Productivity Award in 2004 and has been translated into six other languages, and Code Quality: The Open Source Perspective (2006), which received a Software Development Productivity Award in 2007. Both books use hundreds of examples from large open source systems, such as the BSD Unix operating system, the Apache web server, and the HSQLDB Java database engine, to demonstrate how developers can comprehend, maintain, and evaluate existing software code. Spinellis has also published more than 100 technical papers in journals and refereed conference proceedings. The article "A Survey of Peer-to-Peer Content Distribution Technologies," which he co-authored in 2004, appeared in the list of ACM's most downloaded digital library articles throughout 2005 and 2006. He serves on the editorial boards of IEEE Software, for which he authors the regular "Tools of the Trade" column, and Springer's Journal in Computer Virology.
Spinellis is a member of the ACM, the IEEE, the Usenix Association, the Greek Computer Society, and the Technical Chamber of Greece, a founding member of the Greek Internet User's Society, and an active Wikipedian. He is a four-time winner of the International Obfuscated C Code Contest and a member of the crew listed in the Usenix Association 1993 Lifetime Achievement Award.
Georgios Gousios is a researcher by profession, a software engineer by education, and a software enthusiast by passion. He is currently working on his PhD thesis at the Athens University of Economics and Business, Greece. His research interests include virtual machines, operating systems, software engineering, and software quality. He holds an MSc with distinction from the University of Manchester, UK.
Gousios has contributed code to various OSS projects and has worked on R&D projects in both academic and commercial settings. He is currently the project manager, design authority, and a core development team member for SQO-OSS, a multinational EU-funded research project spanning five countries, developed by 40 people, and comprising 65k lines of code. The project investigates novel ways of evaluating software quality.
In his academic life, Gousios has published 10 technical papers in refereed conferences and journals. One of them, "A comparison of dynamic web content technologies of the Apache web server," won the best paper award at the 2002 System Administration and Networking Conference and was the first comprehensive study in its field.
Gousios is a member of the ACM, the IEEE, the Usenix Association and the Technical Chamber of Greece.
Table of Contents
Preface; Organization of This Book; Conventions Used in This Book; Safari® Books Online; Using Code Examples; How to Contact Us
Part One: General Principles of Searching For and Using Evidence
Chapter 1: The Quest for Convincing Evidence; 1.1 In the Beginning; 1.2 The State of Evidence Today; 1.3 Change We Can Believe In; 1.4 The Effect of Context; 1.5 Looking Toward the Future; 1.6 References
Chapter 2: Credibility, or Why Should I Insist on Being Convinced?; 2.1 How Evidence Turns Up in Software Engineering; 2.2 Credibility and Relevance; 2.3 Aggregating Evidence; 2.4 Types of Evidence and Their Strengths and Weaknesses; 2.5 Society, Culture, Software Engineering, and You; 2.6 Acknowledgments; 2.7 References
Chapter 3: What We Can Learn from Systematic Reviews; 3.1 An Overview of Systematic Reviews; 3.2 The Strengths and Weaknesses of Systematic Reviews; 3.3 Systematic Reviews in Software Engineering; 3.4 Conclusion; 3.5 References
Chapter 4: Understanding Software Engineering Through Qualitative Methods; 4.1 What Are Qualitative Methods?; 4.2 Reading Qualitative Research; 4.3 Using Qualitative Methods in Practice; 4.4 Generalizing from Qualitative Results; 4.5 Qualitative Methods Are Systematic; 4.6 References
Chapter 5: Learning Through Application: The Maturing of the QIP in the SEL; 5.1 What Makes Software Engineering Uniquely Hard to Research; 5.2 A Realistic Approach to Empirical Research; 5.3 The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research; 5.4 The Quality Improvement Paradigm; 5.5 Conclusion; 5.6 References
Chapter 6: Personality, Intelligence, and Expertise: Impacts on Software Development; 6.1 How to Recognize Good Programmers; 6.2 Individual or Environment; 6.3 Concluding Remarks; 6.4 References
Chapter 7: Why Is It So Hard to Learn to Program?; 7.1 Do Students Have Difficulty Learning to Program?; 7.2 What Do People Understand Naturally About Programming?; 7.3 Making the Tools Better by Shifting to Visual Programming; 7.4 Contextualizing for Motivation; 7.5 Conclusion: A Fledgling Field; 7.6 References
Chapter 8: Beyond Lines of Code: Do We Need More Complexity Metrics?; 8.1 Surveying Software; 8.2 Measuring the Source Code; 8.3 A Sample Measurement; 8.4 Statistical Analysis; 8.5 Some Comments on the Statistical Methodology; 8.6 So Do We Need More Complexity Metrics?; 8.7 References
Part Two: Specific Topics in Software Engineering
Chapter 9: An Automated Fault Prediction System; 9.1 Fault Distribution; 9.2 Characteristics of Faulty Files; 9.3 Overview of the Prediction Model; 9.4 Replication and Variations of the Prediction Model; 9.5 Building a Tool; 9.6 The Warning Label; 9.7 References
Chapter 10: Architecting: How Much and When?; 10.1 Does the Cost of Fixing Software Increase over the Project Life Cycle?; 10.2 How Much Architecting Is Enough?; 10.3 Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting; 10.4 So How Much Architecting Is Enough?; 10.5 Does the Architecting Need to Be Done Up Front?; 10.6 Conclusions; 10.7 References
Chapter 11: Conway's Corollary; 11.1 Conway's Law; 11.2 Coordination, Congruence, and Productivity; 11.3 Organizational Complexity Within Microsoft; 11.4 Chapels in the Bazaar of Open Source Software; 11.5 Conclusions; 11.6 References
Chapter 12: How Effective Is Test-Driven Development?; 12.1 The TDD Pill--What Is It?; 12.2 Summary of Clinical TDD Trials; 12.3 The Effectiveness of TDD; 12.4 Enforcing Correct TDD Dosage in Trials; 12.5 Cautions and Side Effects; 12.6 Conclusions; 12.7 Acknowledgments; 12.8 General References; 12.9 Clinical TDD Trial References
Chapter 13: Why Aren't More Women in Computer Science?; 13.1 Why So Few Women?; 13.2 Should We Care?; 13.3 Conclusion; 13.4 References
Chapter 14: Two Comparisons of Programming Languages; 14.1 A Language Shoot-Out over a Peculiar Search Algorithm; 14.2 Plat_Forms: Web Development Technologies and Cultures; 14.3 So What?; 14.4 References
Chapter 15: Quality Wars: Open Source Versus Proprietary Software; 15.1 Past Skirmishes; 15.2 The Battlefield; 15.3 Into the Battle; 15.4 Outcome and Aftermath; 15.5 Acknowledgments and Disclosure of Interest; 15.6 References
Chapter 16: Code Talkers; 16.1 A Day in the Life of a Programmer; 16.2 What Is All This Talk About?; 16.3 A Model for Thinking About Communication; 16.4 References
Chapter 17: Pair Programming; 17.1 A History of Pair Programming; 17.2 Pair Programming in an Industrial Setting; 17.3 Pair Programming in an Educational Setting; 17.4 Distributed Pair Programming; 17.5 Challenges; 17.6 Lessons Learned; 17.7 Acknowledgments; 17.8 References
Chapter 18: Modern Code Review; 18.1 Common Sense; 18.2 A Developer Does a Little Code Review; 18.3 Group Dynamics; 18.4 Conclusion; 18.5 References
Chapter 19: A Communal Workshop or Doors That Close?; 19.1 Doors That Close; 19.2 A Communal Workshop; 19.3 Work Patterns; 19.4 One More Thing...; 19.5 References
Chapter 20: Identifying and Managing Dependencies in Global Software Development; 20.1 Why Is Coordination a Challenge in GSD?; 20.2 Dependencies and Their Socio-Technical Duality; 20.3 From Research to Practice; 20.4 Future Directions; 20.5 References
Chapter 21: How Effective Is Modularization?; 21.1 The Systems; 21.2 What Is a Change?; 21.3 What Is a Module?; 21.4 The Results; 21.5 Threats to Validity; 21.6 Summary; 21.7 References
Chapter 22: The Evidence for Design Patterns; 22.1 Design Pattern Examples; 22.2 Why Might Design Patterns Work?; 22.3 The First Experiment: Testing Pattern Documentation; 22.4 The Second Experiment: Comparing Pattern Solutions to Simpler Ones; 22.5 The Third Experiment: Patterns in Team Communication; 22.6 Lessons Learned; 22.7 Conclusions; 22.8 Acknowledgments; 22.9 References
Chapter 23: Evidence-Based Failure Prediction; 23.1 Introduction; 23.2 Code Coverage; 23.3 Code Churn; 23.4 Code Complexity; 23.5 Code Dependencies; 23.6 People and Organizational Measures; 23.7 Integrated Approach for Prediction of Failures; 23.8 Summary; 23.9 Acknowledgments; 23.10 References
Chapter 24: The Art of Collecting Bug Reports; 24.1 Good and Bad Bug Reports; 24.2 What Makes a Good Bug Report?; 24.3 Survey Results; 24.4 Evidence for an Information Mismatch; 24.5 Problems with Bug Reports; 24.6 The Value of Duplicate Bug Reports; 24.7 Not All Bug Reports Get Fixed; 24.8 Conclusions; 24.9 Acknowledgments; 24.10 References
Chapter 25: Where Do Most Software Flaws Come From?; 25.1 Studying Software Flaws; 25.2 Context of the Study; 25.3 Phase 1: Overall Survey; 25.4 Phase 2: Design/Code Fault Survey; 25.5 What Should You Believe About These Results?; 25.6 What Have We Learned?; 25.7 Acknowledgments; 25.8 References
Chapter 26: Novice Professionals: Recent Graduates in a First Software Engineering Job; 26.1 Study Methodology; 26.2 Software Development Task; 26.3 Strengths and Weaknesses of Novice Software Developers; 26.4 Reflections; 26.5 Misconceptions That Hinder Learning; 26.6 Reflecting on Pedagogy; 26.7 Implications for Change; 26.8 References
Chapter 27: Mining Your Own Evidence; 27.1 What Is There to Mine?; 27.2 Designing a Study; 27.3 A Mining Primer; 27.4 Where to Go from Here; 27.5 Acknowledgments; 27.6 References
Chapter 28: Copy-Paste as a Principled Engineering Tool; 28.1 An Example of Code Cloning; 28.2 Detecting Clones in Software; 28.3 Investigating the Practice of Code Cloning; 28.4 Our Study; 28.5 Conclusions; 28.6 References
Chapter 29: How Usable Are Your APIs?; 29.1 Why Is It Important to Study API Usability?; 29.2 First Attempts at Studying API Usability; 29.3 If At First You Don't Succeed...; 29.4 Adapting to Different Work Styles; 29.5 Conclusion; 29.6 References
Chapter 30: What Does 10x Mean? Measuring Variations in Programmer Productivity; 30.1 Individual Productivity Variation in Software Development; 30.2 Issues in Measuring Productivity of Individual Programmers; 30.3 Team Productivity Variation in Software Development; 30.4 References
Contributors; Colophon