Synopses & Reviews
Celebrated for its easy-to-grasp, student-oriented approach, this book puts research design into a practical context. Emphasizing that research must be internally valid, externally valid, and ethically conducted, Mitchell and Jolley show students how to apply the theory behind research methods and give novices the practical advice they need to conduct successful research. Within this framework, the book encourages students to value, create, and conduct ethical research projects. The authors introduce ethical issues in Chapter 1 and discuss ethics throughout the book.
Synopsis
RESEARCH DESIGN EXPLAINED gives students an appreciation of science's excitement and relevance to psychology by explaining concepts clearly and using real-life analogies and examples. Authors Mitchell and Jolley help students develop a true understanding of research design, rather than simply memorize terms, by focusing on important, fundamental concepts and demonstrating the logic behind research design.
About the Author
After graduating summa cum laude from Washington and Lee University, Mark L. Mitchell received his M.A. and Ph.D. degrees in psychology at The Ohio State University. He is currently a professor at Clarion University. Janina M. Jolley graduated with "Great Distinction" from California State University at Dominguez Hills and earned her M.A. and Ph.D. in psychology from The Ohio State University. She is currently a consulting editor of The Journal of Genetic Psychology and Genetic Psychology Monographs and a professor of psychology at Clarion University.
Table of Contents
Preface.

1. PSYCHOLOGY AND SCIENCE. Overview. Why Psychology Uses the Scientific Approach. The Characteristics of Science. The Characteristics of Psychology. The Importance of Science to Psychology. Questions About Applying Techniques From Older Sciences to Psychology. Internal Validity Questions: Did the Treatment Cause a Change in Behavior? Construct Validity Questions: Making the Leap From the Physical World to the Mental World? External Validity Questions: Can the Results Be Generalized? Ethical Questions: Should the Study Be Conducted? Conclusions About the Questions That Researchers Face. Why You Should Understand Research Design. To Understand Psychology. To Read Research. To Evaluate Research. To Protect Yourself From "Quacks." To Be a Better Thinker. To Be Scientifically Literate. To Increase Your Marketability. To Do Your Own Research. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

2. GENERATING AND REFINING RESEARCH HYPOTHESES. Overview. Generating Research Ideas From Common Sense. Generating Research Ideas From Previous Research. Specific Strategies. Conclusions About Generating Research Ideas From Previous Research. Converting an Idea Into a Research Hypothesis. Make It Testable. Make It Supportable. Be Sure to Have a Rationale: How Theory Can Help. Demonstrate Its Relevance: Theory Versus Trivia. Refine It: 10 Time-Tested Tips. Make Sure That Testing the Hypothesis Is Both Practical and Ethical. Changing Unethical and Impractical Ideas Into Research Hypotheses. Make Variables More General. Use Smaller Scale Models of the Situation. Carefully Screen Potential Participants. Use "Moderate" Manipulations. Do Not Manipulate Variables. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

3. READING AND EVALUATING RESEARCH. Overview. Reading for Understanding. Choosing an Article. Reading the Abstract. Reading the Introduction. Reading the Method Section. Reading the Results Section. Reading the Discussion. Developing Research Ideas From Existing Research. The Direct Replication. The Systematic Replication. The Conceptual Replication. The Value of Replications. Extending Research. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

4. MEASURING AND MANIPULATING VARIABLES: RELIABILITY AND VALIDITY. Overview. Choosing a Behavior to Measure. Errors in Measuring Behavior. Overview of Two Types of Measurement Errors: Bias and Random Error. Errors Due to the Observer: Bias and Random Error. Errors in Administering the Measure: Bias and Random Error. Errors Due to the Participant: Bias and Random Error. Summary of the Three Sources and Two Types of Measurement Error. Reliability: The (Relative) Absence of Random Error. The Importance of Being Reliable: Reliability as a Prerequisite to Validity. Using Test-Retest Reliability to Assess Overall Reliability: To What Degree Is a Measure "Random Error Free"? Identifying (and Then Dealing With) the Main Source of a Measure's Reliability Problems. Conclusions About Reliability. Beyond Reliability: Establishing Construct Validity. Content Validity: Does Your Test Have the Right Stuff? Internal Consistency Revisited: Evidence That You Are Measuring One Characteristic. Convergent Validation Strategies: Statistical Evidence That You Are Measuring the Right Construct. Discriminant Validation Strategies: Showing That You Are Not Measuring the Wrong Construct. Summary of Construct Validity. Manipulating Variables. Common Threats to a Manipulation's Validity. Evidence Used to Argue for a Manipulation's Construct Validity. Pros and Cons of Three Common Types of Manipulations. Conclusions About Manipulating Variables. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

5. BEYOND RELIABILITY AND VALIDITY: CHOOSING THE BEST MEASURE FOR YOUR STUDY. Overview. Sensitivity: Will the Measure Be Able to Detect the Differences You Need to Detect? Achieving the Necessary Level of Sensitivity. Conclusions About Sensitivity. Scales of Measurement: Will the Measure Allow You to Make the Kinds of Comparisons You Need to Make? The Different Scales of Measurement. Why Our Numbers Do Not Always Measure Up. Which Level of Measurement Do You Need? Conclusions About Scales of Measurement. Ethical and Practical Considerations. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

6. INTRODUCTION TO DESCRIPTIVE METHODS. Overview. Uses and Limitations of Descriptive Methods. Descriptive Research and Causality. Description for Description's Sake. Description for Prediction's Sake. Why We Need Science to Describe Behavior. We Need Scientific Measurement. We Need Systematic, Scientific Record-Keeping. We Need Objective Ways to Determine If Variables Are Related. We Need Scientific Methods to Generalize From Experience. Conclusions About the Need for Descriptive Research. Sources of Data. Ex Post Facto Data: Data You Previously Collected. Archival Data. Observation. Tests. Describing Data From Correlational Studies. Graphing Data. Correlation Coefficients. The Coefficient of Determination. Summary of Describing Correlational Data. Making Inferences From Correlational Data. Analyses Based on Correlation Coefficients. Analyses Not Involving Correlation Coefficients. Interpreting Significant Results. Interpreting Null (Nonsignificant) Results. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

7. SURVEY RESEARCH. Overview. Questions to Ask Before Doing Survey Research. What Is Your Hypothesis? Can Self-Report Provide Accurate Answers? To Whom Will Your Results Apply? Conclusions About the Advantages and Disadvantages of Survey Research. The Advantages and Disadvantages of Different Survey Instruments. Written Instruments. Interviews. Planning a Survey. Deciding on a Research Question. Choosing the Format of Your Questions. Choosing the Format of Your Survey. Editing Questions: Nine Mistakes to Avoid. Sequencing Questions. Putting the Final Touches on Your Survey Instrument. Choosing a Sampling Strategy. Administering the Survey. Analyzing Survey Data. Summarizing Data. Using Inferential Statistics. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

8. INTERNAL VALIDITY. Overview. Problems With Two-Group Designs. Why We Never Have Identical Groups. Conclusions About Two-Group Designs. Problems With the Pretest-Posttest Design. Three Reasons Participants May Change Between Pretest and Posttest. Three Measurement Changes That May Cause Scores to Change Between Pretest and Posttest. Conclusions About Trying to Keep Everything Except the Treatment Constant. Ruling Out Extraneous Variables. Accounting for Extraneous Variables. Identifying Extraneous Variables. The Relationship Between Internal and External Validity. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

9. THE SIMPLE EXPERIMENT. Overview. Logic and Terminology. Experimental Hypothesis: The Treatment Has an Effect. Null Hypothesis: The Treatment Does Not Have an Effect. Conclusions About Experimental and Null Hypotheses. Manipulating the Independent Variable. Experimental and Control Groups: Similar, but Treated Differently. The Value of Independence: Why Control and Experimental Groups Shouldn't Really Be "Groups." The Value of Assignment (Manipulating the Treatment). Collecting the Dependent Variable. The Statistical Significance Decision: Deciding Whether to Declare That a Difference Is Not a Coincidence. Statistically Significant Results: Declaring That the Treatment Has a Reliable Effect. Null Results: Why We Can't Draw Conclusions From Nonsignificant Results. Summary of the "Ideal" Simple Experiment. Errors in Determining Whether Results Are Statistically Significant. Type 1 Errors: "Crying Wolf." Type 2 Errors: "Failing to Announce the Wolf." The Need to Prevent Type 2 Errors: Why You Want the Power to Find Significant Differences. Statistics and the Design of the Simple Experiment. Power and the Design of the Simple Experiment. Conclusions About How Statistical Considerations Impact Design Decisions. Nonstatistical Considerations and the Design of the Simple Experiment. External Validity Versus Power. Construct Validity Versus Power. Ethics Versus Power. Analyzing Data From the Simple Experiment: Basic Logic. Estimating What You Want to Know: Your Means Are Sample Means. Why We Must Do More Than Subtract the Means From Each Other. How Random Error Affects Data From the Simple Experiment. When Is a Difference Too Big to Be Due to Random Error? Analyzing the Results of the Simple Experiment: The t Test. Making Sense of the Results of a t Test. Assumptions of the t Test. Questions Raised by Results. Questions Raised by Nonsignificant Results. Questions Raised by Significant Results. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

10. EXPANDING THE SIMPLE EXPERIMENT: THE MULTIPLE-GROUP EXPERIMENT. Overview. The Advantages of Using More Than Two Values of an Independent Variable. Comparing More Than Two Kinds of Treatments. Comparing Two Kinds of Treatments With No Treatment. Comparing More Than Two Levels (Amounts) of an Independent Variable to Increase External Validity. Using Multiple Groups to Improve Construct Validity. Analyzing Data From Multiple-Group Experiments. Analyzing Results From the Multiple-Group Experiment: An Intuitive Overview. Analyzing Results From a Multiple-Group Experiment: A Closer Look. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

11. EXPANDING THE SIMPLE EXPERIMENT: FACTORIAL DESIGNS. Overview. The 2 X 2 Factorial Experiment. How One Experiment Can Do as Much as Two. How One Experiment Can Do More Than Two. Why You Want to Look for Interactions: The Importance of Moderating Variables. Examples of Questions You Can Answer Using the 2 X 2 Factorial Experiment. Potential Results of a 2 X 2 Factorial Experiment. A Main Effect and No Interaction. Two Main Effects and No Interaction. Two Main Effects and an Interaction. No Main Effect and an Interaction. One Main Effect and an Interaction. No Main Effects and No Interaction. Analyzing the Results From a Factorial Experiment. What Degrees of Freedom Tell You. What F and p Values Tell You. What Main Effects Tell You: On the Average, the Factor Had an Effect. What Interactions Usually Tell You: Combining Factors Leads to Effects That Differ From the Sum of the Individual Effects. Interpreting Ordinal and Disordinal Interactions. Ordinal Interactions May Mean That the Amount of Effect One Treatment Has Depends on the Level of the Other Treatment. Ordinal Interactions May Be Measurement-Induced Mirages. When to Suspect That Your Ordinal Interaction Is an Artifact of Having Ordinal Data. Crossover (Disordinal) Interactions Mean That the Type of Effect One Treatment Has Depends on the Level of the Other Treatment. Conclusions About Interpreting Ordinal and Disordinal Interactions. Putting the 2 X 2 to Work. Adding a Replication Factor to Increase Generalizability. Using an Interaction to Find an Exception to the Rule: Looking at a Potential Moderating Factor. Using Interactions to Create New Rules. Hybrid Design: Factorial Designs That Allow You to Study Nonexperimental Variables. Increasing Generalizability. Studying Effects of Similarity: The Matched Factors Design. Finding an Exception to the Rule: The Moderating Factor Design. Boosting Power: The Blocked Design. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

12. MATCHED PAIRS, WITHIN-SUBJECTS, AND MIXED DESIGNS. Overview. The Matched-Pairs Design. Procedure. Considerations in Using Matched-Pairs Designs. Analysis of Data. Summary of the Matched-Pairs Design. Within-Subjects (Repeated Measures) Designs. Considerations in Using Within-Subjects Designs. Dealing With Order Effects. Randomized Within-Subjects Designs. Procedure. Analysis of Data. Summary of Randomized Within-Subjects Designs. Counterbalanced Within-Subjects Designs. Procedure. Advantages and Disadvantages of Counterbalancing. Conclusions About Counterbalanced Within-Subjects Designs. Choosing Designs. Choosing Designs: The Two-Conditions Case. Choosing Designs: When You Have More Than One Independent Variable. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

13. SINGLE-N DESIGNS AND QUASI-EXPERIMENTS. Overview. Inferring Causality in Randomized Experiments. Establishing Covariation. Establishing Temporal Precedence. Battling Spuriousness. Single-n Designs. Battling Spuriousness by Keeping Nontreatment Factors Constant: The A-B Design. Variations on the A-B Design. Evaluation of Single-n Designs. Conclusions About Single-n Designs. Quasi-Experiments. Battling Spuriousness by Accounting for--Rather Than Controlling--Nontreatment Factors. Time-Series Designs. The Nonequivalent Control-Group Design. Conclusions About Quasi-Experimental Designs. Concluding Remarks. Summary. Key Terms. Exercises. Web Supplements.

14. PUTTING IT ALL TOGETHER: WRITING RESEARCH PROPOSALS AND REPORTS. Overview. Aids to Developing Your Idea. The Research Journal. The Research Proposal. Writing the Research Proposal. General Strategies for Writing the Introduction. Specific Strategies for Writing Introduction Sections for Different Types of Studies. Writing the Method Section. Writing the Results Section. Writing the Discussion Section. Putting on the Front and Back. Writing the Research Report. What Stays the Same or Changes Very Little. Writing the Results Section. Writing the Discussion Section. Concluding Remarks. Summary. Key Terms. Web Supplements.

Appendix A: APA's Ethical Code: Implications for Planning and Conducting Research. Appendix B: Searching the Literature. Appendix C: Sample APA Style Paper. Appendix D: A Beginning Researcher's Guide to Statistics. Appendix E: Statistical Tables. Glossary. References. Credits. Index.