Author: Cheek, J., & Øby, E.

Tags: engineering   design  

Year: 2023

Praise for This Book

This is one of the clearest and most accessible research methods books I have read as a scholar. I highly recommend it for both students and teachers alike. It is an excellent survey of both qualitative and quantitative research methods.
—Natalie Danielle Baker, Sam Houston State University

Students writing a thesis or dissertation will benefit from this text's approach to the development and design of research. It provides a strong theoretical understanding of the process of moving from an idea to a research proposal to the final research project. A good addition to the library of someone who is doing research.
—Hugh Clark, Florida Gulf Coast University

This book provides a detailed outline of research methods. The authors break the concepts and content down in a manner that is easily understood by the reader. They also provide activities for the reader to use to apply the content.
—Jaimee L. Hartenstein, University of Central Missouri

The authors describe the complexities of research and clearly describe the many details that go into a well-designed and executed study. They have done an amazing job describing the process of research design. Any student learning about research will benefit from this book.
—Lauren Hays, University of Central Missouri

This is an excellent text for those instructors who recognize the importance of giving research design more comprehensive treatment than is typical of most textbooks.
—Scott Liebertz, University of South Alabama

I will add this book as required reading for my course and personally add this to my collection of research material to aid in my writing and evaluation projects.
—Sherill Morris-Francis, Mississippi Valley State University

This text is a very approachable, conversation-based introduction to research design for social science students. It provides an inviting approach to the research process that draws the reader in as a participant instead of a spectator. Its focus on research as an iterative and reflexive process rather than a linear set of steps is refreshing. It engages the reader in the process and introduces questions to assist in understanding the complexity and interconnectivity of each part of the process.
—Isla A. Schuchs Carr, Texas A&M University-Corpus Christi

Research Design
Sara Miller McCune founded SAGE Publishing in 1965 to support the dissemination of usable knowledge and educate a global community. SAGE publishes more than 1,000 journals and over 600 new books each year, spanning a wide range of subject areas. Our growing selection of library products includes archives, data, case studies, and video. SAGE remains majority owned by our founder and after her lifetime will become owned by a charitable trust that secures the company’s continued independence. Los Angeles | London | New Delhi | Singapore | Washington DC | Melbourne
Research Design
Why Thinking About Design Matters

Julianne Cheek
Elise Øby
FOR INFORMATION:

SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: order@sagepub.com

SAGE Publications Ltd.
1 Oliver's Yard
55 City Road
London, EC1Y 1SP
United Kingdom

SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India

SAGE Publications Asia-Pacific Pte. Ltd.
18 Cross Street #10-10/11/12
China Square Central
Singapore 048423

Copyright © 2023 by SAGE Publications, Inc.

All rights reserved. Except as permitted by U.S. copyright law, no part of this work may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without permission in writing from the publisher.

All third party trademarks referenced or depicted herein are included solely for the purpose of illustration and are the property of their respective owners. Reference to these trademarks in no way indicates any relationship with, or endorsement by, the trademark owner.

Printed in [the United States of America / Canada]

Library of Congress Control Number: 2022914343

Acquisitions Editor: Helen Salmon
Product Associate: Audra Bacon
Production Editor: Astha Jaiswal
Copy Editor: Diane DiMura
Typesetter: diacriTech
Proofreader: Barbara Coster
Cover Designer: Gail Buschman
Marketing Manager: Victoria Velasquez

This book is printed on acid-free paper.

22 23 24 25 26 10 9 8 7 6 5 4 3 2 1
BRIEF CONTENTS

Preface xix
Acknowledgments xxvii
About the Authors xxix
Chapter 1 Research Design 1
Chapter 2 Ethical Issues in Research Design 27
Chapter 3 Developing Your Research Questions 47
Chapter 4 Why Methodology Matters When Designing Research 69
Chapter 5 Qualitative and Quantitative Approaches to Designing Research 91
Chapter 6 Obtaining Data Using Qualitative Approaches 119
Chapter 7 Analyzing and Interpreting Qualitative Data 151
Chapter 8 Foundational Design Issues When Using Quantitative Methods 181
Chapter 9 Collecting Data Using Quantitative Methods 211
Chapter 10 Designing Research Using Mixed Methods 241
Chapter 11 Why Knowing and Declaring Your Research Design Hand Matters 271
Glossary 289
References 299
Index 313

CONTENTS

Preface xix
Acknowledgments xxvii
About the Authors xxix

Chapter 1 Research Design 1
Purposes and Goals of the Chapter 1
Introduction: What Is Research Design? 2
Designing Research Is an Iterative Process 2
Research Design as a Messy, Complex, and Demanding Thought-Driven Process 4
Research Design: Working With the Literature 5
Using Relevant Literature When Designing Research 5
Working With the Literature Is Not the Same as Simply Reviewing It 7
How Do You Make Decisions About Which Literature to Trust or Rely On? 8
Journal Articles 9
Books and Book Chapters 10
Other Types of Literature That Might Be Useful if Used With Care 11
Research Design: Considering Methodology and Methods 12
Methods 13
Research Design: Considering Theory 14
The Importance of Reflexive Thinking When Designing Research 18
What Does Reflexivity Mean? 18
Putting Reflexive Thinking Into Practice When Designing Research 19
Ethics: Much More Reflexive Thinking Still to Do 20
Conclusions 21
Summary of Key Points 22
Key Research-Related Terms Introduced in This Chapter 23
Supplemental Activities 23
Further Readings 24
Notes 24

Chapter 2 Ethical Issues in Research Design 27
Purposes and Goals of the Chapter 27
What Is Research Ethics? 28
Putting Informed Consent Into Practice 29
Informed Consent—Who, What, and When 30
Informed Consent in Relation to "Vulnerable" Populations 31
Putting Confidentiality and Anonymity Into Practice 32
The Use of a Pseudonym Does Not Necessarily Ensure Anonymity 33
What You Need to Think About When Reusing, Repurposing, and Sharing Data 35
How to Address These Types of Questions? 36
What You Need to Think About When Using Information on the Internet as Data 37
Blurring the Boundary Between Public and Private 37
Working With Ethics Committees 39
Focusing on the Principles, Not the Requirements 40
Conclusions 41
Summary of Key Points 43
Key Research-Related Terms Introduced in This Chapter 43
Supplemental Activities 43
Further Readings 45
Notes 45

Chapter 3 Developing Your Research Questions 47
Purposes and Goals of the Chapter 47
Bringing Research Questions Into Focus 48
Feasibility Considerations 51
Putting the Idea of "Think Big, Plan Big, but Do a Small, Well-Contained Study" Into Practice 51
Using the Literature When Developing Research Questions 53
What Is Missing in the Existing Body of Knowledge in the Literature Related to Your Problem? 53
Caution: Mind the Gap 54
Beyond the Gap 54
Different Forms of Reasoning and How They Shape the Form That Research Questions Take 55
Deductive Reasoning 56
Inductive Reasoning 57
Putting Iterative and Reflexive Research Question Development Into Practice—Learning From Others 58
Scratching the Underbelly of Research Design: Developing Clear Research Question(s). Reflections by Maxi Miciak and Christine Daum 59
A Bit About Us, Our Projects, and What Makes Us "Qualified" to Write This 59
In the Beginning There Was . . . 60
Generating the Question(s) 60
Embracing Rather Than Running From Critique 63
Landing on a Question(s) 64
Key Messages 64
Conclusions 65
Summary of Key Points 66
Key Research-Related Terms Introduced in This Chapter 66
Supplemental Activities 67
Further Readings 67
Notes 67

Chapter 4 Why Methodology Matters When Designing Research 69
Purposes and Goals of the Chapter 69
Thinking Methodologically 70
Data: A Concept Shaped by Methodological Assumptions 71
The Importance of Bringing Methodological Considerations Related to Data Into Focus 72
Paradigms: Sets of Basic Beliefs That Guide Methodological Thinking 74
Onto-Epistemological Derived Assumptions Underpin Methodological Thinking 76
Inquiry Paradigms and How They Connect to Methodological Thinking 78
Positivism 78
Critiques of Positivism 80
Post-Positivism 80
Constructivism 81
Inquiry Paradigms Affect Thinking About Whether Research Is Credible 82
Why Is Thinking About Your Paradigmatic Stance Important? 84
The Importance of Asking Methodological Questions of Your Research Design 85
Avoiding the Misuse of Methods 86
Conclusions 86
Summary of Key Points 87
Key Research-Related Terms Introduced in This Chapter 88
Supplemental Activities 88
Further Readings 90
Notes 90

Chapter 5 Qualitative and Quantitative Approaches to Designing Research 91
Purposes and Goals of Chapter 91
Qualitative and Quantitative Research Strategies 92
Qualitative and Quantitative Approaches Reflect Different Research Purposes 92
A Word of Caution 93
Qualitative and Quantitative Approaches Reflect Different Logic of Inquiry 93
Quantitative Approaches Employ Deductive Reasoning 93
Qualitative Approaches Predominantly Draw on Inductive Reasoning 94
Another Word of Caution 95
Rounding Off Our Introductory Discussion of Qualitative and Quantitative Research Strategies 96
Common Features Associated With Quantitative and Qualitative Approaches 96
Common Features Associated With Quantitative Ways of Thinking When Designing Research 96
Common Features Associated With Qualitative Ways of Thinking When Designing Research 98
Variation Within Quantitative and Qualitative Research Approaches 101
Quantitative Inquiry as a Diverse Approach 101
Quantitative Approaches Vary in the Methods That They Use 103
Capturing the Variety in Quantitative Approaches 103
Qualitative Research Approaches as Diverse Strategies of Inquiry 105
More Specialized Forms of Qualitative Research 106
Ethnography as an Example of a Specialist Qualitative Approach 106
Discourse Analysis—Another Form of Specialized Qualitative Inquiry 107
How Many Specialist Types of Qualitative Inquiry Are There and What Are They? 108
Capturing the Variety of Qualitative Approaches 109
Summing Up: Design Considerations in Light of the Variety Within Qualitative Research 111
Which Are Better: Qualitative or Quantitative Research Approaches? 112
Conclusions 114
Summary of Key Points 115
Key Research-Related Terms Introduced in This Chapter 116
Supplemental Activities 116
Further Readings 117
Notes 117

Chapter 6 Obtaining Data Using Qualitative Approaches 119
Purpose and Goals of the Chapter 119
Qualitative Methods Are Not Stand-Alone Data Collection Techniques 121
Different Qualitative Methods Use Different Strategies of Inquiry 122
Key Questions to Ask Yourself When Choosing Types of Qualitative Methods or Strategies of Inquiry 123
Navigating the Diversity Between and Within Qualitative Strategies of Inquiry When Designing Your Research 123
How Structured Will Your Qualitative Interviews Be and Why? 125
Choices About Structure Are Choices About the Degree of Control You Have Over the Interview 126
Using the Same Reflexive Thinking When Collecting Data Using Other Qualitative Methods 127
Will You Interview Your Participants Individually or in Some Form of Group and Why? 129
Focus Groups—A Specific Type of Interview 130
Which to Choose? 130
What Will You Ask Your Participants in the Interview and Why? 132
Developing Lines of Inquiry 132
How Many Lines of Inquiry and Associated Questions Are Ideal for an Interview Guide? 134
Designing Good Interview Questions 135
Ask One Question at a Time 135
Avoid Asking Dichotomous and Therefore Redundant or Limiting Questions 136
Don't Ask Leading Questions or Make Leading Comments When Interviewing 136
Try Out Your Draft Lines of Inquiry and Questions Before You Do Your Interviews 136
Applying the Same Type of Thinking to Other Types of Qualitative Methods 137
Who Will You Interview and Why? 140
Choosing Between Different Types of Purposeful Sampling Plans in Your Study Design 140
Putting Purposeful Sampling Into Practice When Designing Your Research 142
Conclusions 146
Summary of Key Points 146
Key Research-Related Terms Introduced in This Chapter 147
Supplemental Activities 147
Further Readings 148
Notes 148

Chapter 7 Analyzing and Interpreting Qualitative Data 151
Purposes and Goals of the Chapter 151
Analysis of Qualitative Data: An Iterative and Dynamic Strategy 152
When Does Analysis "Begin" When Designing and Conducting Qualitative Research? 153
Using Memos to Capture Your Analytic Thinking and Hunches 154
Why You Should Not Wait to Begin Analyzing Your Qualitative Data Until All Your Data Is Collected 156
Developing an Iterative Qualitatively Driven Analytic Strategy 157
Strategies for Organizing the Data You Collect and Keeping Track of Your Analytical Thinking About That Data 157
Strategies for Deciding What Parts of the Data You Have Collected Are Relevant for Addressing Your Research Problem 159
The Process of Data Condensation 159
Coding—A Strategy to Condense Your Data 161
More Choices and Decisions to Make When Putting Coding Into Practice 163
Methodological Choices About Whether to Employ an Inductive or Deductive Approach to Your Coding 163
Choosing a Coding Strategy Congruent With the Theoretical Pillars of Your Design 165
Coding in Grounded Theory 166
Rounding Off Our Discussion of Coding 167
The Art of Interpretation 168
Ways of Establishing the Credibility of the Interpretations You Make and Therefore the Rigor and Trustworthiness of Your Research 170
Collecting and Analyzing Data—When Do You Know That You Are "Done"? 171
Connecting Analytical Considerations to Decisions About Sample Size 172
Principles to Guide Sample Size Considerations in Your Qualitative Research Design 173
Conclusions 175
Summary of Key Points 176
Key Research-Related Terms Introduced in This Chapter 177
Supplemental Activities 178
Further Readings 178
Notes 179

Chapter 8 Foundational Design Issues When Using Quantitative Methods 181
Purposes and Goals of the Chapter 181
What You Need to Think About in Order to Design Credible Quantitative Research 182
Key Questions to Ask Yourself When Designing Quantitative Research 184
Why You Need to Ask Yourself All These Key Questions Simultaneously 184
Where to Begin? Deciding Who You Will Collect Numerical Data From and Why 185
What You Will Need to Think About When Using a Sample in Your Research Design 187
How Do I Design a Sampling Strategy That Enables a Representative Sample? 188
Are There Other Sampling Strategies I Can Consider if Probability Sampling Is Not Feasible? 189
Rounding Off: Important Things to Keep in Mind if You Decide to Use a Sample in Your Research Design 190
Choosing an Analysis Procedure Suitable for Answering Your Research Question 192
Research Questions About What Is Going On in a Study Population 193
Descriptive Procedures 193
Correlational Procedures 194
Research Questions About Why Something Happens in a Study Population 195
When the Research Question Takes the Form of a Hypothesis 197
Aspects That Affect Whether or Not the Conclusions From Testing a Hypothesis Are Statistically Reasonable 197
Still More Thinking to Do 198
What Types of Data Are There? 199
Nominal Data 199
Ordinal Data 200
Interval/Ratio Data 201
How Different Types of Data Enable Different Types of Knowledge 203
Why Making Sure You Collect Data of the "Right" Type Is Not Enough to Ensure That Your Research Design Is Statistically Reasonable 205
Conclusions 205
Summary of Key Points 207
Key Research-Related Terms Introduced in This Chapter 208
Supplemental Activities 208
Further Readings 209
Notes 209

Chapter 9 Collecting Data Using Quantitative Methods 211
Purposes and Goals of the Chapter 211
Measuring Variables to Enable Valid Research Findings 212
Are You Measuring What You Think You Are? 213
Making Sure the Measurements You Make Measure What They Claim to 214
Example 1 215
Example 2 215
What Can We Conclude From These Examples? 216
Taking a Closer Look at What Enables Variables to Be Measured 216
How to Make Abstract Variables Measurable 218
Step 1: Clarify Your Understandings (i.e., Develop the Constructs) of the Variables 218
Step 2: Identify Quantifiable Factors That Will Enable Measuring the Constructs 219
Developing Measurement Items to Actually Measure the Variables 220
Developing a Measurement Instrument 222
Developing Measurement Items That Measure What You Intend Them to Measure 222
Do Your Measurement Items Demand Too Much From the Respondents? 224
Will the Measurement Items Enable Consistent Measurements of the Quantifiable Aspects They Are Intended to Measure? 226
Is the Form a Measurement Item Takes in Keeping With What That Item Is Supposed to Measure? 227
Takeaways From This Section 229
Putting a Measurement Instrument Into Practice 229
Response Rates and How They Relate to Sampling and Validity 230
Ways of Collecting Data When Doing Survey Research, and Why They Matter 231
Scenario 1 232
Scenario 2 233
Scenario 3 233
Take-Home Messages From These Scenarios 234
Conclusions 235
Summary of Key Points 237
Key Research-Related Terms Introduced in This Chapter 239
Supplemental Activities 239
Further Readings 240
Notes 240

Chapter 10 Designing Research Using Mixed Methods 241
Purposes and Goals of the Chapter 241
What Is a Mixed Methods Research Approach? 242
Mixed Methods Research as Combining Qualitative and Quantitative Research Approaches 243
Snapshot 1: Mixed Methods as a Method That Combines Qualitative and Quantitative Approaches 243
Snapshot 2: Mixed Methods as a Way of Thinking That Combines Aspects of Qualitative and Quantitative Thought 244
A Different Definition and View of What Mixed Methods Research Is 244
Snapshot 3: Mixed Methods as a Single Study, but Where One of the Methods Is Incomplete and Cannot Stand Alone 244
What Can We Learn From These Snapshots About What Mixed Methods Is? 246
Why Use a Mixed Methods Research Design? 247
The Importance of Thinking About Why You Might Use a Mixed Methods Approach 248
Priority and Timing of the Components in a Mixed Methods Study 249
Thinking About Matters Related to Priority or Weighting of Components 249
Thinking Through, and Deciding About, Matters Related to the Timing of the Components 250
An Example of How to Connect Purpose, Priority, and Timing and Why This Matters 253
Option One: A QUAL → Quan Design 254
Option Two: A QUAN → Qual Design 255
What Can We Learn From This Example? 256
Mixing—A Central Consideration in Mixed Methods Research 257
"Mixing" as a Concept Not Just a Procedure 258
Incorporating Different Levels of Focus Into Your Thinking About Mixing 258
What About Paradigmatic-Related Considerations When Mixing Methods? 259
Summing Up Our Discussion of Mixing 261
Strategies for Navigating the Complex and Contested Field of Mixed Methods Research 262
Strategy One: Use Diagramming (Not Just Diagrams) as a Way of Keeping Track of the Decisions You Make About Your Emerging Mixed Methods Design 262
Strategy Two: Keep Folding Back Reflexively on Your Own Thinking, and the Decisions You Are Making or Have Made, When Designing and Reporting Your Mixed Methods Research 263
Strategy Three: Don't Go It Alone. Get Help and Find Support Along the Way 265
Conclusions 265
Summary of Key Points 266
Key Research-Related Terms Introduced in This Chapter 267
Supplemental Activities 268
Further Readings 269
Notes 269

Chapter 11 Why Knowing and Declaring Your Research Design Hand Matters 271
Purpose and Goals of the Chapter 271
Knowing and Declaring What Your Research Design Related Hand Is 273
Pulling Up a Chair at the Research Design Table 274
Putting All of Your Cards on the Table and Declaring Your Hand: An Important Part of Research Design 276
Declaring Your Hand: Missing in Action in Much of the Reporting of Research 276
What Happened Along the Way? 277
How to Declare Your Research Hand: Circling Back to Tell the Story of the Designing of Your Research 279
How to Reflexively Circle Back to Tell the Story of Your Project When Reporting on Your Research 281
Capturing All This in a Diagram of Some Sort 282
Developing, and Then Diagramming, an Overview of the Process of Research Design 283
The Importance of If . . . Then Thinking 284
Conclusions 285
Summary of Key Points 286
Key Research-Related Terms Introduced in This Chapter 288
Supplemental Activities 288
Further Readings 288
Notes 288

Glossary 289
References 299
Index 313

PREFACE

WHY WRITE A BOOK ABOUT RESEARCH DESIGN? AND, EVEN MORE IMPORTANT, WHY SHOULD YOU READ IT?

Why did we write this book?

Designing research is about making decisions to transform a research idea into a research plan able to provide answers about some sort of research problem or question. Thinking about, and then making, these decisions results in what we call our research design—the plan that we will follow to put our research into practice and address our research problem or questions. Making these decisions begins the moment that we begin to think about a topic that we want to know more about. For example: What specifically do we want to know about this topic and why? What contribution is the research that we are designing intended to make to the development of knowledge related to this topic? How can we obtain that type of knowledge in a way that is credible? When we think through and address these types of questions, we are designing research.

A few years ago, we began developing a reading list for a course we were going to teach about research design. We had assumed, given the explosion in research-related textbooks and articles published in the past decade, that our problem would be which texts to choose, given that there would be so many to choose from. Well, we were wrong. What we found on closer examination was that, in fact, there were very few books about research design itself. If the thinking sitting in and behind designing research was overtly discussed at all, it was usually in relation to an aspect of research design that the textbook was focused on—not research design as a whole. For example, some books discussed research design in relation to a specific type of method, or how to write a specific part of that design such as a literature review. Others discussed research design in relation to qualitative or quantitative approaches to research more generally.
The result of this was that research design was usually discussed as part of a book that was focused on something else, such as a specific method or an approach to research. It seemed to us that the focus on designing research was getting lost in all this, becoming secondary to a focus on matters of procedures and techniques associated with the research methods that formed part of that design. While methods, and procedures for putting those methods into place, are part of what research design is, in themselves they are not what designing research is all about. Research design is about the layers of interconnected thinking and decisions that give rise to the choice of a method or procedure to be part of the overall research design.

This prompted us to take a closer look at some of the syllabi for courses about research design—what was being taught about research design, and how? We discovered the same thing as we had when looking for research design related textbooks. There was much more focus on methods and how to design them in these syllabi and course outlines, and much less focus on the thinking needing to be done before deciding on a particular method or
approach to be a part of an overall research design. In effect, research design itself had once again been relegated to a secondary focus and discussed in relation to something else—the procedures that make up those methods.

We wondered if we had missed something in all this, or maybe even just got it wrong. However, looking into this further, we found that what we were noticing, others were commenting on too. For example, Denzin and Giardina (2016) observed that

students learning about research and research design are more often than not taught particular "methods of data collection" (such as interviews, case studies, focus groups, ethnography, other basic research design techniques, etc.) within the context of research methods or research design courses; it is few and far between that philosophy of science and philosophy of inquiry seminars are required of graduate students—and even fewer still, we would contend, that call into question or contest the very notion of data or evidence itself. (p. 6)

This trend troubled us. It was this that provided the initial trigger for us to start thinking about writing this book. Writing a book about research design, rather than about methods of data collection, was the goal. When doing so, we wanted to develop and explore the idea of developing a research design where the emphasis was put on the process (the designing), not just the product (the design). To achieve this, we wanted to overtly return thinking to where it rightfully belongs in the process of designing research: at the center of that process, to which, and from which, all paths and decisions lead and intersect as our research design emerges. This is because the form that a research design comes to take affects every part of the process of designing research—from the questions that are, or even can be, asked, through to the way that data, and the ways of obtaining that data, are thought about.
We agree with Kuntz (2015) that being methodologically responsible—and, for our purposes here, being responsible when designing research—cannot simply be "reduced to maintaining the integrity of research procedure . . . which method to use, where to use it, and how to interpret the data it produces" (p. 11). Rather, it is thinking through, and justifying, the way that a research design became the way that it is that is important for maintaining the integrity of the research design, the data produced as a result of putting that design into practice, and the interpretations made of that data. This is a thinking that requires us to ask a lot of questions—both about all areas of the research design and about the assumptions that we bring with us when we are designing our research. As Swaminathan and Mulvihill (2017) note, "[a]sking good questions is fundamental to the heart of research, critical thinking, creative thinking, and problem solving" (p. 1). It is also fundamental when designing research.

Therefore, there is much intellectual work to do, and many decisions to make, when thinking about your research design. This book is about this intellectual work and those decisions. For as Becker (1998) reminds us,

[E]very subject we study [for our purposes here that includes research design] has already been studied by lots of people with lots of ideas of their own . . . about what it's about and what the objects and events in it mean. These experts by profession or group membership usually have an uninspected and unchallenged monopoly of ideas on "their" subject. Newcomers to the study of the subject, whatever it is [for our purposes read research design] can be easily seduced into adopting those conventional ideas as the uninspected premises of their research. . . . [W]e need ways
of expanding the reach of our thinking, of seeing what else we could be thinking and asking, of increasing the ability of our ideas to deal with the diversity of what goes on in the world [or for our purposes, in and when designing research]. (p. 7)

Making a contribution to expanding the reach of our thinking, and seeing what else we could be thinking and asking about research design, and of ourselves when designing that research, is why we wrote this book.

Why do we think that you should read this book?

So, keeping in mind the above discussion about why we wrote the book, why do we think you should read this book?

First, the book is written in such a way as to open up, rather than shut down, discussions of research design. It introduces and develops the idea of a research design process in its own right, but at the same time acts as a resource for indicating what the next steps are in obtaining more information about each part of that process. Put another way, it gives you enough knowledge to know what you do not know about the process of designing research, and therefore what you need to find out more about, and how you might do that.

Second, throughout the book we engage in a dialogue with you, the reader, to provide a serious but accessible introduction to research design, able to be used by you as a guide when designing your own research and/or when reading and making judgments about reports of others' research. The style of this dialogue is in the form of a conversation about what we, and others, have learned as researchers and teachers of research—things we wish someone had told us about before we began reading about, and attempting to design, our research.

Third, in this dialogue we actively encourage you to think with, and through, what is written in the book, and then bounce off that thinking to do even more thinking.
These recurring cycles of thinking continue throughout the reading of this book, just as they do throughout the "lifecycle of the research [design] process" (Swaminathan & Mulvihill, 2017, p. 2). They will require you to focus on, think through, and make informed decisions about a range of interrelated and interconnected theoretical, methodological, and ethical considerations that shape what we term, or know as, a research design. These recurring cycles of thinking, and the understandings that they enable, are an important part of ethical and responsible research and research design. This is so no matter what research approach we are using. It is not an optional extra.

Fourth, we do not pretend that such thinking will be a straightforward or easy process. It will be challenging and messy at times. And, as Becker (1998) notes, "[I]t's more work than if you did things in a routine way that didn't make you think at all" (p. 7). However, the payoff is a well-thought-through research design able to be defended, and consequently research that is able to be trusted. Therefore, maybe you should read the book because of what it is not—an "easily digestible" (Koro-Ljungberg, 2016, p. 6) oversimplification of the complex process of designing research.

Fifth, it is not often that we get access to accounts and reflections of how this complex process was navigated, as it is often hidden behind what Morse (2008, p. 1311) calls "the elegance of the end product"—in this case, the research design that is produced. Therefore, we have included in the book accounts of how this complexity was navigated by researchers and students when they were thinking about, and through, their research design. What did they do and why? These reflexive accounts provide you with unique insights into the "real story" of designing research and are a powerful way of learning from the hard-earned wisdom of others.
WHO IS THIS BOOK FOR?

This book is for anyone with an interest in better understanding the process of designing research. The discussion in the book will assist you to think through questions such as: What needs to be thought about, and through, when designing research? When and why? What are the effects of that thinking on the way that research is designed and a research design produced? What do you need to think about when reading about the way research was designed in order to decide whether you trust that design?

The discussion is grounded in our experience of teaching research to undergraduate and postgraduate students, and also of working with people who are expert in a variety of practice fields and want to know more about research. The material is presented in such a way that it can be overtly linked to readers' development of their own research designs for their research projects. Readers not actually engaged in conducting research can use the material to test the thinking, and expose the assumptions, in research reports that they may be presented with, or are using, in their practice areas. Therefore, the book will appeal to, and be useful for, a wide range of readers.

Students attending courses in research (e.g., research design, as well as specific methods) and students who are actually doing research (especially for the first time) will find the book a useful companion, and at times sparring partner, as they are thinking about and designing their research. The book will also interest teachers of courses about research design or research methods who are looking for a text that challenges, opens up, and extends students' thinking about research design rather than limiting it to a focus on research design in relation to specific research methods.
Another group of readers often overlooked when thinking about who might find a book like this useful are practitioners and managers who want to use research to provide evidence for their practice or the practices that they manage. This might be either by doing that research themselves, or by being able to make informed decisions about the trustworthiness of research that has been reported. For those managers commissioning research, the book provides an overview of the research design matters that are important to consider when writing the brief for the research, evaluating the progress of the research, and using the results of the research.

HOW THE BOOK WORKS

The book focuses on the thinking that underpins and shapes the development of a research design. This thinking affects the choices and decisions we make when designing our research, as well as the way we write about, and represent, the research design produced as a result of that thinking and those choices. Thus the “red thread” that holds the chapters together is the idea of designing research as a process in which there is constant thinking through, and revisiting of, decisions about that design as it is developed. There are many layers of thinking that sit behind a description, diagram, or representation of what we call a research design. Capturing the interconnected and layered thinking that sits around, and in, any research design led to quite a few challenges when we were writing this book. Chief among them was how to capture a complex, nonlinear, and dynamic process of thinking in a static book that by necessity presents material from beginning to end. We do this by overtly linking all the chapters in the book. Hence, points introduced in one chapter pick up on
points from earlier chapters, or are picked up on and developed in the chapters that follow and then returned to again. We use tip boxes and sometimes endnotes to indicate where in the book those discussions occur, to guide those readers who want to explore how points made in the chapter they are reading are connected to discussions in other chapters. They can then return to rejoin the discussion in the chapter they were reading originally. In this way, we actively encourage you, our readers, to use the chapters in this book, and the thinking in them, “when it looks like they might move your work along—at the beginning, in the middle, or toward the end of your research” (Becker, 1998, p. 9). Consequently, just as thinking about research design cannot be reduced to a linear process, neither can reading this book. When thinking about and writing this book and the individual chapters in it, we never intended that the book or the chapters must, or even could, be read in a linear form in a single direction. Reading and interacting with this book involves moving backward and forward in the thinking that is at the heart of the research design process.

The Structure of the Book

Each chapter begins with an outline of the purposes and goals of the chapter. Here we present the issues to be discussed and thought through in the chapter. The chapter takes the form of an extended discussion of those goals. At the end of each chapter, the key points from that chapter are summarized, thereby providing a way for readers to self-check how they fared when reading the chapter. Throughout the chapters, we use the device of different types of boxes (Tip, Activity, and Putting It Into Practice boxes) to emphasize specific points made in the text, to encourage you to think about the practical implications of what we have discussed, and to provide tips about where to find more in-depth or detailed discussions of such points.
To consolidate the discussion in the chapter, after the summary of the key points in each chapter we have included several activities for you to try. These activities are designed to reinforce central points and ideas raised in the chapter. They focus on putting these ideas into practice when you are actually designing your research. This is because it is one thing to have a discussion about central points related to research design, but quite another to actually put that discussion into practice! These activities can be done individually or with others, such as in a class or tutorial group. Each chapter then concludes with a list of suggested further readings related to the content in the chapter. To build your research design-related vocabulary, for each chapter we provide a list of any new key research design-related terms introduced for the first time in the book in that chapter. While we will have discussed and defined these terms throughout the discussion in that chapter (and placed them in bold where first introduced), a concise working definition of each of these terms can be found in the Glossary at the end of the book. This Glossary will assist readers who may need to refresh their memory about terms first introduced in earlier chapters.

Recommended Ways of Reading the Book

At the risk of stating the obvious, we recommend that you enter our conversation about research design by reading Chapter 1, in which we take a closer look at the idea of designing research and what that actually means and involves. Chapter 1 is a foundational
chapter as it introduces you to many of the terms you will come across, and concepts that you will need to think about, when designing your research. Focusing on designing research as a nonlinear process involving recurring cycles of thinking about each aspect of your research design, the chapter provides an overview of, and introduction to, the areas that together make up your research design: areas such as methodological, theoretical, and ethical considerations. We also look at the role that relevant literature plays in the development of your research design. The chapter calls for reflexivity (we take a close look at this idea—what it is and why it matters) on the part of the researcher to acknowledge and make transparent the theoretical and methodological assumptions shaping their research design. We argue that reflexivity and ethical considerations are central to designing responsible research. The rest of the book unpacks and develops the points raised in Chapter 1. The following points are designed to give you a taste of what is to come:

• In Chapter 2, we pick up on the point of ethical thinking permeating the entire research design process. The chapter focuses on reflexively thinking about, with, and through ethical considerations and what they mean for the way that you will design your research. Allowing ethical thinking to frame your research project in this way means acknowledging that ethical thinking in research involves much more than getting ethics approval from an ethics committee.

• Chapter 3 is about the process of focusing, and refocusing, your thinking to work out what the problem area or issue that your research is being designed to address actually is, and what the questions are that you want to ask about that problem or issue, and why. We look at how this thinking, and the decisions that you make as a result of it, affects what form your research questions take, and the possible answers that they can give.
The chapter includes an invited contribution from Maxi Miciak and Christine Daum that provides a unique reflexive account of how these researchers used cycles of thinking to work out what it was that they really wanted to focus their doctoral research on, and why.

• In Chapter 4, the focus is on the connections between method, methodology, epistemology, and ontology, and how these connections affect the way we design our research. We focus on how any specific research method is shaped by methodological assumptions and understandings, which in turn are shaped by onto-epistemological assumptions. Thus, methods are not an isolated set of rules or neutral procedures for data collection. The role they play in, and the effect they have on shaping, a research design is the result of the thinking and understandings that gave rise to those methods in the first place.

• Chapter 5 picks up on the connections between method, methodology, epistemology, and ontology discussed in Chapter 4 to explore the effects of these connections on the way that we think about data, evidence, and what are known as qualitative and quantitative approaches and data when designing our research.

• Chapters 6 through 10 are a series of chapters in which we discuss methods as part of the interconnected web of decisions that make up a research design. In these chapters, we demonstrate that methods are far more than procedures. They are part of a research design that provides both the rationale and context for their use. Therefore, detailed discussions of specific individual methods are
not our focus in this suite of chapters. Rather, we concentrate on the thinking you need to do with, and about, methods when designing research. This is the thinking that underpins the choices you make about which methods to use, and why you decided to use them. Consequently, individual methods are discussed as exemplars of this thinking rather than as methods per se. Chapters 6 and 7 focus on qualitative approaches to research and what you will need to think about when using them as part of your research design. Chapters 8 and 9 look at what you will need to think about, and why, when using quantitative approaches to research. In Chapter 10, we build on our discussions in Chapters 6 through 9 and look at what you will need to think about when designing research using mixed methods—an approach that combines more than one method in some way in your research design. We have used all of the above approaches in our own empirical work. In Chapters 6 through 10 we draw on this experience to share insights that we have learned, sometimes the hard way, about thinking about, and then putting, these approaches into practice.

• Chapter 11 rounds off the book by returning in many ways to where we began—thinking about and exposing the assumptions that all of us are making when we design our research. Focusing on the idea of declaring your hand about the thinking that underpins your research design, the chapter pulls together much of what we have discussed in earlier chapters, using many examples when doing so. Therefore, rather than a sign-off statement or reiteration of what has been covered, the discussion is more an inspirational one designed to encourage you to pursue your interest in research design, keep thinking reflexively about designing research, and continue asking questions about, and of, any research design—including the one that you might be developing.
Coda: Coming clean—Why we were a bit annoyed when we began writing

In the interests of being completely honest about why we wrote this book, and in the way that we have, we think that we should “come clean” and declare that this was also because we were a bit annoyed. We were annoyed that research design is often viewed as a subject that is dry, boring, tedious, and to be endured. Further, being asked to teach a course in research design is often considered drawing the short straw when teaching allocations are made. We don’t agree! Therefore, a large part of the impetus for writing this book is our belief, as educators about research and active practitioners of what we teach, that research design and research can be taught and discussed in ways that are relevant, interesting, and exciting. Research design need not, and should not, be impenetrable, boring, tedious, or dry. Neither should textbooks about it. We hope that our book-length conversation about research design will excite, encourage, and maybe at times even provoke you, but never bore you, as you think through, and with, the material that we present.

Instructor Resources

An instructor resource site at http://edge.sagepub.com/cheek1e is available to support this book. It includes editable PowerPoint slides and suggested essay questions to accompany the book.

ACKNOWLEDGMENTS

This book would not have been possible without the support of our respective families who encouraged us, put up with us working long days and over weekends on the book, and who never once doubted that we would complete it—even if we sometimes did! Nor would it have been possible without the support of friends who listened to our struggles and encouraged us over cups of coffee, in person and over Skype, while reminding us that we could do this. We also would like to acknowledge the support for, and interest in, this project from colleagues and managers at Østfold University College, where both of us were working when most of the book was written. This support was an important part of making the book possible. The expert and honest feedback of Helen Salmon, our acquisitions editor, has been a major factor in assisting us to write the book we wanted to, and in a way that speaks to the range of readers we wanted it to. We also appreciated the assistance given to us along the way by various people at SAGE—in particular that given by Audra Bacon and Olivia Weber-Stenis as the book neared completion. Finally, the book would not be what it is without the lively discussions and feedback from all the students, fellow researchers, mentors, and educators that we have had the privilege to work with, and learn from, over many decades. They have taught us much about what really matters when thinking about research design, and it is from their questions about, and at times struggles with, designing research that the idea for this book emerged. Thank you all.

SAGE and the authors are also grateful for feedback from the following reviewers during the development of this text:

Natalie D. Baker, Sam Houston State University
Hugh G. Clark, Florida Gulf Coast University
David R. Dunaetz, Azusa Pacific University
Sarah Fineran, Des Moines Area Community College
Jaimee L. Hartenstein, University of Central Missouri
Lauren Hays, University of Central Missouri
Scott Liebertz, University of South Alabama
Sherill Morris-Francis, Mississippi Valley State University
Howard J. Moskowitz, Capella University
Sarah Raskin, Virginia Commonwealth University
Isla A. Schuchs Carr, Texas A&M University–Corpus Christi
Kyle J.A. Small, Anderson University
Stephen E. Sussman, Barry University
Anne Whitesell, Ohio Northern University

ABOUT THE AUTHORS

Julianne Cheek is a professor at Østfold University College, Norway. Her publications reflect her ongoing interest in qualitative inquiry and the politics of that inquiry. From 2010 to 2012, she had the honor of serving as the vice president of the International Association of Qualitative Inquiry and currently serves on the External Advisory Board of the International Congress of Qualitative Inquiry, held annually at the University of Illinois. She has had the privilege of serving as editor in chief of the well-respected journal Qualitative Health Research, as well as editorial board member and international advisory member of a number of core journals and significant Handbooks related to qualitative inquiry. She has a long-term interest in the teaching and development of courses in research design and methods at master’s, doctoral, and postdoctoral levels. Julianne has formal qualifications in teaching and education and began her working life teaching science in secondary schools. Her PhD study, which was in social sciences, used qualitative research approaches. Her first academic post was teaching research methods and psycho-social science to nursing students. Since then she has taught research design and methods to students in a number of health-related areas including physiotherapy, nutrition, medicine, and sport sciences, as well as to students in leadership and organizational studies. Throughout her career she has held senior administrative and research-related university posts, including Dean of Graduate Studies, Director of Early Career Development, and Dean of Research.

Elise Øby is an associate professor at Kristiania University College, Norway. She holds a PhD in mathematics and a master’s degree in organization and leadership. Her master’s study employed qualitative research. She has held academic positions in multiple universities and colleges in Norway.
She has also worked in public administration and held a management position at a university college, part of which was to manage the allocation of resources for teaching and research. She started her academic career teaching mathematics and statistics across a variety of study programs including teacher education, finance, business administration, and engineering. Teaching mathematics and statistics in applied areas such as finance and business administration sparked her interest in the decisions involved when using mathematics and statistics as tools to learn something about complex problems. It also highlighted the importance of ensuring that researchers and students recognize that designing research that includes the use of such tools involves much more than simply choosing a statistical procedure for analyzing empirical data. The thinking that sits behind the choice of such tools in research projects was missing in many discussions about research design. Her current teaching includes courses in research design, quantitative methods, and qualitative methods. She has an interest in enabling students and researchers to design research in such a way that the findings from that research are credible, reliable, trustworthy, and defensible.
NOTE

1. From Stake (2010, p. 78).
1

RESEARCH DESIGN
What You Need to Think About and Why

PURPOSES AND GOALS OF THE CHAPTER

The purpose of this chapter is to introduce the idea of research design: what it is, and what you will need to think about when developing your research design. The focus is on the thinking that affects the choices and decisions you make when designing your research. The chapter provides the overall conceptual framework for the book and introduces core areas that you will need to think about when you are designing your research. This includes considerations about literature, methods, methodology, theory, and ethics: what they are and what effect they have on the shape that your research design takes. We highlight how designing your research requires you to join existing conversations in relevant research literature related to the various areas of that design: areas such as methodological, theoretical, and ethical considerations. We explore how the way that we navigate those conversations, what parts of them we join, and what parts of them we ignore affects the way we think when we make decisions about our developing research design. Throughout the chapter we emphasize the part that reflexivity plays in the thinking about, and development of, any type of research design. When doing so we highlight how thinking reflexively forces us to constantly think through all decisions about that design as it develops. The goals of the chapter are to

• Establish what research design is.
• Introduce the idea of research design as an iterative, nonlinear process.
• Identify foundational decisions and considerations that make up a research design.
• Consider how theoretical, methodological, and ethical decisions shape any research design.
• Illustrate that research design is much more than simply selecting methods or techniques that will be used to collect data.
• Highlight the importance of linking the purposes of the planned research to how that research will be designed and conducted.
• Demonstrate the use of relevant research literature to assist in the development of the research design.
• Provide information about how to make decisions about the relative merit of using different types of literature when designing research.
• Emphasize that designing research requires reflexivity on the part of the researcher.
• Present, and explain, the conceptual framework for the book.

INTRODUCTION: WHAT IS RESEARCH DESIGN?

Put simply, research design refers to the process by which a research idea is developed into a research project or plan that can then be carried out by a researcher or research team. It results in “a logical plan for getting from here to there, where here may be defined as the initial set of questions to be answered, and there is some set of conclusions (answers) about these questions” (Yin, 2009, p. 26). Research design is not simply about research methods or procedures. While methods and procedures are one of the areas you will need to think about when designing research, there are many other areas that make up that design. These areas will need to be thought through as well. This includes theoretical, methodological, and ethical considerations, each of which is discussed in later parts of the chapter. When discussing these areas we highlight that any thinking that we do about any of them (e.g., ethics) will affect other areas of your research design (e.g., the methods you choose to use and how you use them). It is the thinking that we do, and the decisions that we make, about these areas that shapes and makes up what we term a research design—our plan for getting from here to there.

TIP: A DIAGRAM OF A SPECIFIC RESEARCH DESIGN ≠ RESEARCH DESIGN

It is important not to confuse the diagram of a research design with what research design is. There is a difference between research design as a thinking-based process and “a” specific research design, usually represented in the form of a diagram. The diagram of a specific research design is a summary or representation of the result of that thinking.
Any diagram of a research design cannot be understood apart from the thinking that gave rise to it in the first place.

DESIGNING RESEARCH IS AN ITERATIVE PROCESS

Designing research is an iterative process. Put simply, an iterative process is “doing something again and again, usually to improve it.”1 It involves cycles of thinking where you begin with an idea, think it through, and then revisit the initial idea that you had, refine or change it in line with that thinking, and then think that change through, and so on. This continues until you have landed on a research design that you believe will be able to get you from here (about to start your research) to there (completing that research in a credible, systematic, and well-thought-through way).
Designing research iteratively involves cycles of visiting and revisiting, examining and reexamining, modifying and then modifying again, each area of your emerging research design. It is about thinking carefully about what we are proposing to do, and why. It will require us to think backward and forward through the various areas of the research design process as our thinking refines or modifies decisions and ideas we first had. For example, if we rethink and change in some way the methods we are proposing to use, then we will need to revisit the ethics-related thinking we have done to see what changes we might have to make to that thinking in light of the methods-related changes we have made. Iterative is not an easy concept to define concisely or precisely. Nor is it an easy concept to put into practice when designing your research. You might find it helpful to think of iterative research design as an active and “constant, continuous process of making and unmaking” what will eventually emerge as your research design (Jackson & Mazzei, 2012, p. 1). When making and unmaking your research design, you will continually ask yourself questions about the decisions you have made about the emerging design in order to modify or confirm those decisions. The goal of asking these questions is to improve and refine the emerging research design. “Asking good questions is fundamental to the heart of research, critical thinking, creative thinking, and problem solving” (Swaminathan & Mulvihill, 2017, p. 1) and occurs throughout the entire “lifecycle of the research process” (Swaminathan & Mulvihill, 2017, p. 2). In the box below we provide an example of putting this type of questioning and iterative thinking into practice.

PUTTING IT INTO PRACTICE: EXAMPLES OF PUTTING ITERATIVE THINKING INTO PRACTICE WHEN DESIGNING RESEARCH

You decide that you will study the process of older people moving into nursing homes.
You begin to think about this idea some more and realize that you will need to think about, and then make, quite a few more decisions in order to be able to design your research study. For example, what exactly do you want to know about that process of moving into nursing homes? Costs (e.g., of providing care for these older people, or costs they incur when moving)? Or the characteristics of the older people making that move (e.g., age, gender, ethnicity)? Or the effect of the move on the older person or their families? These are just a few of the foci your study might take, depending on what you decide it is that you want to find out something about—your research questions. After thinking this through, you decide that you want to know more about how older people themselves experience the process of moving into a nursing home to live. This decision means that you will return to modify your initial decision that your study was about the process of older people moving into nursing homes, and adjust it to reflect what it actually is about that process that you have decided you are interested in, namely, how older people themselves experience that process. You then think more about this focus of how older people themselves experience the process. This leads you to decide that you need to think some more about what
older people you are interested in knowing more about, and why. You decide that the group of older people you are interested in are those who move from hospital to nursing homes. This is because you are interested in an unplanned move made as the result of some form of acute health crisis. This leads you to revise your research focus to how older people who move from hospital to nursing homes experience that move. In this iterative process, there are cycles of visiting and revisiting, examining and reexamining, modifying and then modifying again your thinking and decisions about the focus of your study. Diagrammatically we can represent this process as in Figure 1.1 below.

FIGURE 1.1  CYCLIC THINKING THROUGHOUT THE PROCESS OF DESIGNING RESEARCH

Study the process of older people moving into nursing homes
↓ What exactly is it about this process that you want to know about? (Making a decision)
Study how older people themselves experience the process of moving into nursing homes to live
↓ What older people are you interested in? (Making a decision)
Study how older people moving from hospital to nursing homes experience the process of moving into nursing homes to live
↓ You will continue this cyclic thinking until you have landed on the focus that you want to take in your research

You will then continue this process of cyclic thinking throughout the entire process of designing your research as you think through each of the decisions you make, and their effects on the way that your research will need to be designed.

Research Design as a Messy, Complex, and Demanding Thought-Driven Process

Our discussion of designing research as an iterative process has highlighted that research design is not about linear, discrete, step-by-step thinking. It is a much messier, complex, and demanding process than that. There is a series of interrelated decisions needing to be made.
These decisions enable us to turn our research idea(s) into a well-thought-through, and therefore designed, research study. In order to make thought-through decisions rather than thought-less ones, we need to think carefully about what we are doing, and why, at all points of undertaking our research. It is this thinking, and the iterative cascades of decisions resulting from that thinking, that research design, and designing research, is all about.

TIP

We take a more extended look at an example of the messy process of iteratively “making and unmaking” (Jackson & Mazzei, 2012, p. 1) what will eventually emerge as our research design in Chapter 3, where Maxi Miciak and Christine Daum discuss the way their research questions developed iteratively when they were designing their research, and why they developed in the way that they did.
RESEARCH DESIGN: WORKING WITH THE LITERATURE

A research design is not developed in isolation. When we begin thinking and writing about any aspect of a research design, we become part of a series of long conversations (Hesse-Biber & Leavy, 2006; Locke et al., 2014) others have had before us, and will continue to have after us, about designing research. For example, throughout the process of developing your research design you will need to be aware of, and take into account, what is already known substantively about the problem that your research is being designed to address. Similarly, when thinking about how you might do your research you will need to be aware of, and take into account, the methodological conversations about how research might be done, as well as how the way that you are proposing to do your research relates to those conversations. In other words, you will need to join conversations in the body of knowledge that has been built up by the work of others, and which is relevant to the various areas that make up your research design. These conversations have been going on for many years among researchers. In these conversations, some voices may be louder than others, and some voices might be silenced and/or lost. There is not always agreement about the various areas of research design being discussed. This means that you will need to know enough about these conversations to make a decision, and justify that decision, about which parts of them you will use in your research design and which you will not. Where will you find these conversations? Most of them you will find in what is referred to as “the literature” related to the various areas of your research design.

Using Relevant Literature When Designing Research

To assist you in developing your research design you will draw on, and interact with, relevant literature throughout the entire process of designing your research.
Relevant literature refers to theoretical writing and reports of empirical work “that have important implications for the design, conduct, or interpretation of the study, not simply those that deal with the topic, or in the defined field or substantive area, of the research” (Maxwell, 2006, p. 28). Reading, thinking about, and interacting with relevant literature enables us to join ongoing conversations between scholars and researchers about the different aspects of designing research that we will need to think about when designing our own research studies. When designing research, it is important to become aware of these conversations in the relevant literature, think about the ways the conversations have been had, and then decide and declare the position you will take in relation to those conversations. Taking a position involves considering what parts of the conversations you agree with, what parts you do not, what parts you will use, what parts you will not, as well as what parts of that conversation your work and thinking might add to. It also involves justifying the choices that you make. This will require you to add an “and why?” to each of these considerations. For example, we might be reading literature about empirical2 work in an area related to our initial thinking about, and framing of, what we think our research problem is. However, after reading that empirical work, we may realize that there are aspects of the problem that we need to think about differently, or read more about. This might involve reading other empirical studies that take a slightly different focus. Or, it might involve reading about different, or additional, theoretical concepts that can help reframe the
problem. Reading about new and different ways of thinking about the problem our research is being designed to address forces us to revisit our thinking about that research problem. This may lead us to modify our initial thinking about what the problem is in some way.

Such an iterative process of interacting and thinking with literature occurs throughout the entire research design process, not just when we are thinking about our research problem. All parts of our research design require us to join a range of analytic conversations—about substantive issues, about other empirical work, about theoretical matters, about methodological and method related matters, and about the various interconnections between these different conversations. Our research design is the result of which conversations we have decided to join and those we have not, what decisions we have made as a result of joining them or not joining them, and what conversations we want our research to be part of when it is completed.

PUTTING IT INTO PRACTICE

USING RELEVANT LITERATURE THROUGHOUT THE RESEARCH DESIGN PROCESS

Different types of literature (in terms of their focus) will be used for different purposes at different parts of the process of developing your research design. For example, when you are thinking about your research problem or area, you will work with literature relevant to your substantive problem area to find out what others have and have not done and how this might affect what you choose to focus on (or not focus on) in that area. You will also work with literature that is theoretically relevant to your research problem or area. For example, if you are interested in the theoretical idea of moral distress, you will read literature related to that theoretical concept and how it is, and might be, defined. You will then be able to use the result of your thinking about what that literature is about to inform other parts of your research design.
For example, if you are going to measure moral distress in a group of workers using some sort of quantitative survey, then what you will actually measure will be influenced by what others have identified in the literature to be key aspects of moral distress.

There is also a lot of methodological and methods related literature that you will need to think through, and which can help you when designing your research. No matter what research method(s) you use, you will need to read, and think through, literature related to those methods—ways they have been, and might be, used. You will use this literature to inform what you need to think about when collecting and analyzing data using your selected methods. You will also need to find out about the strengths and weaknesses of those methods so that you are able to make it clear what the way you have designed your research enables you to say (i.e., the type of conclusions you can make using those methods) and equally importantly what it does not.

And of course, from minute one of thinking about your research, you will need to join conversations in the literature about research ethics both generally, and specifically related to the substantive focus of your research, and the way you put your methods of choice into practice.
Figure 1.2 captures this process of using different literature at different parts of the process of developing your research design.

FIGURE 1.2 ■ The Use of Different Literature at Different Parts of the Research Design Process

[Figure: a flow from some sort of problem/hunch, to research area, to research problem/questions, to research methodology and ethical considerations, to associated research methods to collect data to answer question(s), to analysis according to the chosen methodology’s principles, to development of findings/discussion/conclusion in line with the chosen methodology’s principles. Alongside each step sits the relevant literature: substantive literature; literature on relevant theory/knowledge and more refined substantive literature; methodology and methods-focused literature; specific methods literature; literature/knowledge about accepted conventions of analysis; and literature to enable theoretical generalizability.]

Working With the Literature Is Not the Same as Simply Reviewing It

Working with literature when designing research is part of the entire research design process. It is not limited to the production of some sort of one-off, static, review of selected (the reasons for which are not always declared) literature. Instead, working with the literature is central to the iterative process that underpins the development of a research design. It enables the researcher to (a) understand the conversations already happening within and across relevant fields; (b) figure out how to add to these conversations; and (c) identify the best means of doing so theoretically and methodologically. (Ravitch & Riggan, 2017, p. 32)

The importance of thinking with literature as a process, rather than as a one-off static product often called “the” literature review (Ravitch & Riggan, 2017), is picked up on, and explored in detail, throughout this book. For example, in Chapter 2, we join conversations in the literature about ethical considerations when designing research and what
implications those conversations might have both for how we design our research and also what we consider ethical matters to be. In Chapter 3, we focus on what we need to think about when developing our research problem or questions and the part empirical, methodological, and theoretical literature plays in that development. In Chapters 4 and 5, we join methodological related conversations in the literature that impact the way we think about data and what type(s) of knowledge our research is being designed to contribute to those conversations. In Chapters 6 through 10, we participate in, and make decisions about, conversations related to the way that data can be collected at the level of specific methods.

PUTTING IT INTO PRACTICE

WORK WITH THE LITERATURE, DON’T JUST REVIEW IT

There are many textbooks written about “how to” review the literature. Often, they are more about the techniques of finding and summarizing literature, and less about the importance of thinking about that literature as part of an ongoing iterative research design process. While it may be useful to read about ways of searching for, finding, and summarizing journal articles or other relevant literature, it is important to remember that thinking about, and reviewing, the literature when designing research is not simply producing some sort of descriptive overview of what seems to be relevant literature. Rather, working with relevant literature to help inform your thinking about various aspects of your research design is an iterative process which will require constant reference to more, new, and different literature as the design, and the thinking that underpins and shapes that design, develops and unfolds.

You will have a series of ongoing and different conversations with literature during an iterative research design process. As you engage in conversations with literature, we strongly suggest you keep track of what you read as you read it.
For example, when you read an article, make a note about the key points from that article when you read it. At the same time, be careful to note the name of the article, the author, the journal, and the date of publication. Backtracking to figure out where you found something that you later want to check is almost impossible as the amount of literature with which you have your conversations increases rapidly as your design develops. You will need to know these details so that you can cite the author and article from which you obtained your ideas, or from which you used some text.

A citation is a reference to somebody else’s work to acknowledge that the idea was not originally yours or to show that the idea that you are coming up with builds on that person’s work in the first place. This is an important part of responsible and ethical research design. Plagiarism is when we use other people’s work without fully acknowledging that the idea or the words came from that person(s) in the first place. In effect, we are taking those ideas and representing them as our own.

HOW DO YOU MAKE DECISIONS ABOUT WHICH LITERATURE TO TRUST OR RELY ON?

Research related literature is usually categorized in terms of where it has been published and what review process it has gone through. It is important to be aware of these categorizations as not all types of literature are afforded equal weight in terms of their scientific standing and trustworthiness. This difference in scientific standing and trustworthiness
can affect perceptions about the trustworthiness of your research design if doubts are raised about the credibility of the source of the literature that you are using to base aspects of that design on.

Journal Articles

Reports of empirical or theoretical research are usually found in peer reviewed journals and are afforded high status and trustworthiness by most researchers and scholars. This is largely because of the expertise of the editorial board of those journals and the process of peer review that the journal undertakes. In this process of peer review, authors submit their manuscripts, reporting their research, to the journal editor to be reviewed and considered for publication. The manuscript (mostly with the authors’ names and affiliations removed, which is called “blind” peer review) is then sent by the editor to at least two peer reviewers who are experts in the area of the research being reported. These expert reviewers then read, and make scientific judgments about, the quality of the manuscript. Such “blind” peer review is designed to make sure, as far as possible, that the review is influenced only by what is said in the manuscript about the research design, the findings, and their significance. Publications in journals that employ this type of review are considered a reliable and credible form of scientific- and research-related literature because of this form of rigorous review by peers in the field. This type of literature is often called “scientific” literature.

There are two major publication models for scientific journals. One is the traditional subscription model where the author does not pay fees to the publisher of a journal to cover the costs of the peer review process and, if accepted, publishing the article. Instead, these costs are recovered by the publisher of the journal by charging readers a fee for accessing the articles in the journal.
This fee can be in the form of annual subscriptions to the journal, or it can be in the form of paying a fee to access individual full text articles over a set period of time such as 24 hours, after which time access is lost.

The other model is what is known as Open Access publishing. The article still undergoes rigorous peer review and rejection rates for articles submitted to many Open Access journals are in line with those of traditional subscription model journals. However, in Open Access journals the costs of reviewing and publishing the article are paid by the author if, and when, the article is accepted. There are no costs for individuals wanting to read that article—hence the description of this model as Open Access. Access is open to everyone as there is no payment involved. Hence the reach and access of an article in a reputable Open Access journal may be greater than in a journal where not all readers have access to that article because, for example, institutions do not subscribe to that journal so staff and students will need to pay to read those articles.

There are different types of Open Access possible. These differences are related to the degree of, and how, access is given to the article. The one we have described above is known as Gold open access. However, there is also what is called Green open access, where, although the article is not made openly accessible by the journal itself, authors are able to post a version of the article on their personal or institutional website where it can be accessed by readers. There is also an in-between model where journals that use a subscription-based publishing model will make an individual article openly available to everyone if the authors pay a fee to enable this when the article is published (see Richtig et al., 2018 for a good discussion of this).
Increasingly, funding bodies are requiring those gaining funding for their research to enable some form of open access to any articles reporting on research from that funded project so that anyone can read that article, anywhere and without any time limit. This is because the goal of the funding is to enable the development and dissemination of the knowledge gained from the funded research as widely as possible and not depend on a reader having the resources to be able to pay for that access.

Books and Book Chapters

Book chapters and books that are published by what are often described as “good quality” or “reputable” national and international publishing houses are also given credibility and standing in terms of the hierarchy of research and scientific literature. Defining exactly what counts as a quality or reputable national or international publisher of scientific books and book chapters is not clear cut. There is no standard way to identify such publishers. However, to help you make decisions about this you will find that most universities and research institutions have developed their own lists (formal or informal) of whom they consider to be reputable publishers. In some countries, government bodies, drawing on input from researchers, have developed lists of publishers that publish books and book chapters that are recognized as credible and of good quality.3

Publishers deemed reputable have in place similar processes to the peer reviewed journal process. Draft chapters or draft books will be sent out for review by peers and published only if favorable reviews are received or revisions to the chapter or book have been made in line with reviewers’ recommendations. These publishers will cover basic costs associated with the production of the book or chapter, and not charge the author fees unless it is in relation to some sort of recognized Open Access publishing model.
Other factors that can indicate that a publisher is reputable include which authors publish their work with that publisher and how the publisher distributes their books. Guidelines for publication that the publisher provides to authors can also provide a guide to the credibility of the submission and review processes of that publisher—what are they, how detailed are they, and are they used in practice?

TIP

BE AWARE OF WHAT ARE KNOWN AS PREDATORY JOURNALS AND PREDATORY PUBLISHERS

These are journals and publishers that mimic the Open Access model. The goal of these journals is to take the author’s money for their own profit, rather than to ensure that any fees charged are used to make the knowledge gained from the author’s work available as widely as possible and not dependent on a reader having the resources to be able to pay for that access. Consequently, predatory journals will “sell” submitting a paper to the journal by promising authors a very short review time (often a few days) and advertising very high acceptance rates. They will also charge considerable fees at the time of submission. Such fees are usually nonrefundable either fully or in part, even if the paper is not accepted. Characterized by the use of widespread spamming, predatory publishers obtain lists of research groups or publications by researchers and then contact potential authors to ask them to submit their work to them—even if the author might not be working or publishing in the area of the journal’s focus.
The result is that papers published by many of these journals are often of poor quality and do not meet the standards set by reputable Open Access journals with expert editors, editorial boards, and reviewers enabling credible, thorough, and transparent peer review processes. You need to be aware of the existence of predatory journals so that you can make decisions about the credibility of an article you are reading in terms of where it is published and the editorial and review processes that article has been through. Can you trust what is being reported in the article?

You also need to be alert when you are thinking about where you might publish your own research. How can you make sure that the journal you are thinking of submitting your work to is not a predatory one? One excellent resource to help you do this is the guide for what to look for when deciding if a journal is a legitimate one or a predatory one provided by Victoria Glasson (2017) in her post “6 Ways to Spot a Predatory Journal.” She gives the following advice for spotting predatory journals:

1. Always check the journal website thoroughly.
2. Check what professional publishing and/or editors’ organizations or bodies the journal is a member of.
3. Check the journal’s contact information.
4. Research the editorial board.
5. Check if the journal has a peer review process and publication timelines.
6. Read through past issues of the journal. (See Glasson, 2017.)

If you would like to read more about any of the six pieces of advice above, we encourage you to go on the site and take a look at the additional advice Glasson offers in drop-down text attached to these points.4 You will also find useful resources on most major mainstream and reputable publishers’ web pages about what to look out for, and think through, in relation to predatory journals.
For example, on SAGE’s website, you will find Natalie Gerson’s (2019) very useful post “How to Protect Yourself From Predatory Publishers and Other Open Access FAQs.” Advice is also provided to authors on the Taylor and Francis website about making decisions about whether Open Access journals are of good quality.5 Another useful resource is the Journal of Human Lactation editorial statement and policy on the use of references from predatory publishers in articles submitted to that journal (JHL Editorial Team, 2020). In addition, another article in that journal, “Understanding Quality in Research: Avoiding Predatory Journals” by Strong (2019), is very helpful as well. Finally, the website https://thinkchecksubmit.org/ provides useful tools and resources to make sure you are submitting your research to a journal or publisher that can be trusted.

Other Types of Literature That Might Be Useful if Used With Care

There are also some forms of literature that have not undergone such a formal process of blinded peer review but which still can prove very useful in terms of providing ideas and context for aspects of a research design being developed. This does not necessarily mean that these non-peer-reviewed articles or chapters are not trustworthy or not able to be used by researchers. It does mean, though, that they have not undergone quite as rigorous a review process as peer reviewed books, book chapters, and journal articles. Examples include articles in non-peer-reviewed journals that may be in more profession- or practice-based journals, and books and collections of chapters self-published in-house by a researcher or group of researchers.

Another type of literature that may be useful when designing research is reports of some sort. These can be, for example, government or technical reports, policy or reform
documents, or government regulations.6 This type of literature is useful in terms of providing contextual material for the study. At times parts of this literature even form part of the actual texts that the research is being designed to analyze, for example, if the research draws on some form of document or textual analysis as its theoretical and methodological inspiration.

Questions arise about the trustworthiness of using online information, such as Wikipedia and blogs, general encyclopedias, reports in newspapers and popular scientific books, as part of the literature in a study. While there is no absolute or straightforward answer to how to make decisions about the trustworthiness of this information, it is often the case that the further away whatever is being reported or discussed is from data or findings of actual research studies, the less scientific the source can be considered. That said, we agree with Stake (2010) who noted over a decade ago that “Wikipedia is a valuable resource, in spite of the potential mischief of open editing. Wikipedia information begs to be checked, doubted, presented with caution” (p. 116).

With respect to using newspaper articles or other forms of journalistic reporting, it is important to see if the article or report tells us what the information and conclusions are based on. Some articles in some newspapers do this. However, often we get sensational headlines such as “Being rich and successful really IS in your DNA: Being dealt the right genes determines whether you get on in life.”7 Yet, when we read such an article there is often very little there to convince us to trust it. This is because there is very little reporting of any details of the research on which such claims are based.

RESEARCH DESIGN: CONSIDERING METHODOLOGY AND METHODS

Methodological related thinking shapes the form that the research design takes.
Put simply, methodology refers to “the strategy, plan of action, process or design lying behind the choice and use of particular methods and linking the choice and use of methods to the desired outcomes” (Crotty, 1998, p. 3). Methodological considerations force us to think about if, and if so how, a particular method gives us the type of data to generate the type of knowledge that we need to address our research question(s). Therefore, “when we are examining methods, comparing them or thinking about the kinds of knowledge that they produce, then we are doing methodology” (Greener, 2011, p. 5).

Methodological related questions we might ask ourselves when thinking about the design of our research include the following: What type(s) of knowledge or data will I need to address the research question(s) that the research is being designed to answer? Will the use of a particular method contribute this type of data—why or why not? In this way, we use methodological thinking to provide a rationale “for the choice of methods and the particular forms in which the methods are employed” (Crotty, 1998, p. 7) in our research design.

When designing your research, you will need to read widely in order to join existing conversations about both methodology itself and the assumptions different methodologies make about research and how to do that research and the effect these assumptions have on the way that research is thought about and designed. For example, in what are termed “qualitative” approaches to research,8 the research will be designed in such a way as to enable the emergence of rich and qualitative
interpretations of the perceptions or experiences of people about a specific aspect(s) of the everyday context(s) in which they exist. This type of approach is often referred to as naturalistic or interpretive inquiry. It aims for in-depth understandings of peoples’ perceptions and experiences of whatever is the focus of the study.9 Although procedures for the research may be identified beforehand, qualitative research designs are characterized by “built-in flexibility, to account for new and unexpected empirical materials and growing sophistication” (Denzin & Lincoln, 2005b, p. 376).

On the other hand, in what are often termed “quantitative” approaches to research,10 one can assume that the researcher has a commitment to, and has identified the need for, some form of quantification of specific characteristics of a group of people or other objects of interest. This is because “[q]uantitative research works with statistics or numbers that allow researchers to quantify the world” (Stockemer, 2019, p. 8). To employ this sort of approach to inquiry you will need to follow set mathematical and statistically based procedures. This is so that appropriate forms of numerical data are produced that are able to be used to make valid probabilistic and statistically based interpretations about those specific characteristics.11 This type of research design12 requires and “places a premium on the . . . specification of the research strategies and methods of analysis that will be employed” (Denzin & Lincoln, 2005b, p. 376).

Which approach you choose as part of your research design will be the result of your thinking about what type of knowledge you want your research to produce and how you can obtain that knowledge. Once you have decided on your methodological approach, you will then need to make decisions about what methods you will use as part of your research design.
Methods

Research methods refer to “the techniques or procedures used to gather and analyze data related to some research question or hypothesis” (Crotty, 1998, p. 3, italics added). Each research related method has specific procedures and techniques associated with it that are designed to obtain a particular type of knowledge or information—usually referred to as data.13 The methods chosen must be consistent with the type of knowledge or data we want to obtain from our research (our methodological considerations), which in turn must be consistent with the nature of the research problem that is shaping the entire research design.

For example, if we want to know about individual people’s experience of losing their jobs, we will need to choose methods that enable us to get in-depth information about that experience from the point of view of each individual participant—possibly using some form of individual interview. If, however, we want to know how many people from different segments of a defined population group experience anger, sadness, or any of the other possible characteristics that we have identified as being of interest related to the experience of losing one’s job, then we will need to use different methods in our research. We will need methods that allow us to generate the type of numerical data we will need to answer questions related to how many people experience those characteristics of interest.

This is why research methods are not, and should not be thought of as, stand-alone techniques simply able to be selected and inserted into a research design. Decisions about methods are part of the overall research design process. Their choice must be closely related to the purpose of the research and consistent with all other parts of the design.
While designing research does involve thinking about how to use or employ particular methods, this occurs after having thought about why those methods are chosen in the first place—a methodological consideration. Methodological related thinking and decisions “guide a researcher in choosing methods and shape the use of the methods chosen” (Crotty, 1998, p. 3). Methodological thinking related to designing research is thus at a higher level of focus than is thinking about how to do specific methods. For example, in a research design where interviews of some sort are the method of choice, methodological related decisions will be about why interviews themselves are an appropriate method or way to collect the data in our study in terms of the type of knowledge that we will need to address our research questions. Is the choice of interviews, and a particular type of interview, as the method we will use for data collection consistent with the aims and desired outcomes of the research?

You may see the terms methodology and method being used interchangeably in some of the writing about research design, and when research is reported. Or you may see methods as a required or standard heading in research reports and methodology missing altogether. This can lead to thinking of methods as stand-alone techniques, rather than methods as arising from, and unable to be understood apart from, methodological ways of thinking. Neither methods nor the data produced by them can be understood apart from the methodological considerations and choices in the overall research design in which they are embedded. If removed from these understandings, methods are reduced to being merely techniques—sets of procedural rules to follow. We develop this introductory discussion of methodology further in Chapters 4 and 5 of this book.
In addition, all chapters in the book at some point put the spotlight on the effect of methodological related thinking on the aspect of research design being scrutinized in that chapter, such as ethical implications arising from a particular methodology and the methods associated with it (Chapter 2), the way research questions are formulated (Chapter 3), and the way methods are put into practice (Chapters 6, 7, 8, 9, 10).

RESEARCH DESIGN: CONSIDERING THEORY

Theoretical considerations and choices, like methodologically focused considerations and choices, provide orienting ideas that shape the form a research design takes. Assumptions about what theory is and is for, as well as ideas or concepts from specific theories, provide orienting ideas that influence all aspects of the research design. The theoretical framework of your research is “[t]he system of concepts, assumptions, expectations, beliefs and theories that supports and informs your research” (Maxwell, 2013, p. 39). This includes the questions that are asked, the way data is collected, the analysis of that data, the interpretation of that analysis, and consequently the conclusions that can be drawn from the research.

Despite its wide use both in everyday and more academic contexts, it is difficult to come up with a precise definition of theory. Theory tends to be one of those words that everyone uses, but struggles to give a precise meaning to. This is because the long conversations about theory that researchers, philosophers, and others have been having for many hundreds of years have not resulted in agreement among them about “the” or
“right” definition of theory. Thus, theory remains a term used, and understood, differently by researchers depending on how they position their thinking, and subsequently their research design, in relation to that conversation.

In the physical and natural sciences, theory traditionally has been viewed as “a set of abstract (ideally mathematical) propositions, some of which take the form of ‘laws,’ that predict a range of specific events or results” (Maxwell & Mittapalli, 2008, p. 877). Adopting this view, or understanding, of theory affects all aspects of the way designing research is thought about, including methodological ones. If this view of theory is adopted, then

• theory is understood as providing a set of propositions, or a priori concepts, that have been standardized or stabilized in some way, in order for them to be able to be tested to either support or disprove that theory.

• the goal of both the research, and the theory that the research is designed to test in some way, is to enable generalization of what you learn from the data to specific population groups of interest. Such generalization will involve the use of mathematically derived fixed principles and procedures.

These emerging orienting ideas guide the researcher’s thinking when developing their research design. However, this is not a view or understanding of theory, or research, held by all researchers. In much of the thinking in the social sciences, thinking about theory is not restricted to theory as providing a set of propositions, or a priori concepts, able to be tested to either support or disprove that theory. Instead, theory is thought of more as an enabler for the research. The role of theory in this view is to provide what one of the foundational thinkers in the area of social theory, Herbert Blumer (1954), called “sensitizing concepts” (p. 7) for the research.
A sensitizing concept is not prescriptive but merely “suggest[s] directions along which to look” (p. 7). If this view of theory is adopted, then it follows that the role of theory in research design is to

• provide initial orienting concepts for the framing of the study rather than standardized or stabilized a priori concepts to be tested.

• contribute to, and build, our understandings of the way things work or are understood and/or are experienced, in social contexts and settings.

• enable exploration and building of aspects of theoretical concepts. Rather than focusing on only one theory, or one or several theoretical concepts and testing that theory or those concepts, the research is designed to add layers and richness to our understandings of that theory or those concepts themselves.

The following box shows how different views of theory affect research design, using the example of studying motivation. It also demonstrates that the theoretical view the researchers adopted about motivation affected the methodology they used in their study.
PUTTING IT INTO PRACTICE

HOW DIFFERENT VIEWS OF THEORY AFFECT RESEARCH DESIGN – THE EXAMPLE OF STUDYING MOTIVATION

To demonstrate how theoretical understandings, and the orienting concepts derived from those theoretical understandings, affect the way that research is designed and put into practice, we will compare two different ways that research was designed to study the same broad substantive focus, namely motivation and school students. While both studies are about this same broad substantive area, they differ with respect to the orienting ideas that underpin the thinking in their respective research designs. The different methodologies used in each study reflect the different ways that theory is thought about in each study.

Study One (Martin, 2001, 2003) focuses on specific theoretical concepts that have been identified in previous research as being related to motivation in schools. The study is designed to test how those concepts relate to motivation in schools and how they relate to each other (Martin, 2001, 2003). Study Two (Friels, 2016) is designed to build theory by adding layers and richness to existing theoretical understanding of both the idea of motivation itself and how motivation is perceived and experienced and “works or does not work” in a specific school context.

Study One

The Student Motivation Scale developed by Martin (2001, 2002, 2003) is underpinned by a wide range of theoretical contributions from theories about motivation. The scale is an instrument for measuring motivation, designed by separating motivation into factors reflecting enhanced motivation (“boosters”) and factors reflecting reduced motivation (“guzzlers”). The Student Motivation Scale therefore is able to not only focus on “the energy and drive of students,” but also on “their ability to deal with pressure and setback” (both quotes from Martin, 2002, p. 34).
Understanding what the boosters and guzzlers are for each individual student, both in the student’s life and in the classroom, requires a measurement instrument able to measure several aspects of motivation. The Student Motivation Scale is able to do just that. Moreover, because “motivation is students’ energy and drive to learn, work hard, and achieve at school” (Martin, 2001, p. 1), such a multidimensional understanding of motivation has the potential to assist educators “operating in contexts in which students require assistance to sustain motivational strengths and address areas of motivation that may be of some concern” (p. 20). In other words, by measuring a student’s motivation using the Student Motivation Scale, educators can acquire the knowledge they need to “keep high boosters high; keep low guzzlers low; increase low boosters; and reduce high guzzlers” (Martin, 2002, p. 42) for that specific student.

In the 2003 study, Martin examined the Student Motivation Scale by collecting data from 2,561 high school students. The data were analyzed to test the proposed categorization of factors into guzzlers and boosters. The replies were analyzed according to statistical procedures with the intent to generalize beyond the sample of people surveyed in the study. The analysis showed that “the Student Motivation Scale is psychometrically sound and can be usefully implemented to determine groups of students at risk of disengagement, disinterest, and underachievement” (Martin, 2003, p. 88).

Study Two

On the other hand, Friels’s study (2016) involved interviewing four African American high school female students from low-income families using semistructured14 face-to-face interviews. The interviews were designed to “capture the stories of the
students as they share their experiences” (p. 8) related to motivational factors, and their perceptions of the role various factors (such as the community they grew up in, peers, and family) play in their academic success or failure at school. Qualitative analysis of these interviews provided new insights into the students’ perceptions of what it was that motivated them and why. In this way, the research built more nuanced and contextual understandings of the idea of motivation itself that can be used by educators and policy makers when developing targeted programs to provide effective support to African American high school female students from low-income families.

With the previous discussion in mind, useful questions to ask yourself when designing your research include the following:

• What theoretical assumptions are you bringing with you to the table when you are designing your research?

• How do these assumptions affect the way that you design your research?

• Does this matter?

Having this type of discussion with yourself when thinking and writing about your research is a central, but often overlooked, part of designing research. Such thinking will make you aware of why you made the decisions about your research design that you did—including exposing any assumptions you may be making about theory, research, and science when doing so. Exposing these assumptions will enable you to use theory well, and not be used, and therefore constrained, by it. It will help you avoid becoming bogged down in questions or assumptions about “the” “best” theory to use, or “the” “right way” to use theory. Answers to these types of questions are study specific and related to what it is that you want to know about and why you want to know about it. In other words, what type of knowledge do you want your research to provide in order to contribute to better understandings of the problem that your research is designed to address?
TIP

REMEMBER: THEORETICAL CONCEPTS ARE COMPLEX

Theoretical concepts and theories are complex. Therefore, you will need to know enough about these theories to be able to think through this complexity in order to make an informed decision about what understanding of a particular theory or concept you will put into practice in your research design. For example, if you are using a theoretical concept such as power in your research design, you will need to move beyond common sense or assumed understandings of what power is, or making simplistic statements such as “power is a guiding theoretical concept for the study.” This statement disguises the complexity of the idea of power, which is a concept made up of a repertoire of diverse perspectives drawn from diverse theoretical positions (see Hindess, 1996).15
THE IMPORTANCE OF REFLEXIVE THINKING WHEN DESIGNING RESEARCH

Thinking about, and developing, a research design iteratively is challenging, and at times confronting. We are forced to ask questions of our developing research design and expose and examine the assumptions we are making about that design. It is much easier to think of research design as a predetermined type of recipe, or a step-by-step diagram, to follow. This recipe or diagram can be selected from some sort of procedure manual or textbook about “how to do” research and followed step-by-step without really having to do too much thinking except about how to do what the steps require. However, simply following a series of predetermined steps does not allow us the thinking space to consider the assumptions about research, existing knowledge, theory, and our role as researcher that we bring to both the design of our research and the way that we subsequently go about putting that research design into practice. The steps, like research designs, are not neutral, value free, or theory free. An instrumental focus and reducing thinking about research design to “how to do or follow the steps” ignores the assumptions about research, knowledge, and our role as researcher that are embedded in those steps. There is a lot of theory-in-use in the assumptions that any seemingly neutral or objective steps and procedures in that research design make—even if this is not declared or even acknowledged. Thinking about these assumptions and the effect that they have on the cascade of decisions that must be made when designing research requires reflexivity on the part of the researcher.

What Does Reflexivity Mean?

Defining reflexivity is not an easy task because, as Lumsden (2019) points out, “[T]here are numerous definitions and operationalizations of reflexivity” (p.
2).16 Put simply, reflexivity is a type of folding or bending back (Finlay & Gough, 2003) on our own thinking to work out why we have come to think about something in the way that we do. What assumptions do we make when we think in this way? Based on what? When designing your research, such folding or bending back takes the form of “critical self-reflection of the ways in which researchers’ social background, assumptions, positioning and behavior impact on the research process” (Finlay & Gough, 2003, p. ix). It helps you to understand “the significance of the knowledge, feelings, and values” (Attia & Edge, 2017, p. 35) you bring with you when designing your research, and how this affects all aspects of how you develop that design from the research questions you ask, to the methods you use and analytical lenses you employ (Attia & Edge, 2017). For example, are there methods that you believe to be “better” than others? What do you base this on? Can you justify this assumption? How did you come to have such an assumption in the first place? In this way, reflexivity challenges us to work out why we think what we do when designing our research, and whether there are other possible ways of thinking about that design. Hence reflexivity is a form of thinking about research design that is dynamic—not static or linear. It requires adding an “and why” to all the thinking we do about our design. This type of and why thinking forces us to expose, examine, and challenge our thinking and the choices that that thinking resulted in throughout the research design process. This includes choices made before we begin designing the research; while we are designing the research; when we are putting that design into practice; and even after we have
completed the research design. For example, what we will report, or not report, about our research.

Putting Reflexive Thinking Into Practice When Designing Research

Between us we have many years of experience both doing research ourselves and acting as advisors for students’ research. We have noticed that the researchers or students who navigate the challenges of the process of designing research well are those who take the time to reflexively think through, ask questions of, and then declare the decisions that they have made related to the design of their studies. They are constantly asking themselves a series of interrelated questions about that emergent design. Questions such as these: What type of knowledge will I need to address the problem or questions I want to ask? What is an appropriate way to obtain that knowledge? Appropriate in what sense? Methodologically? Ethically? Feasibility-wise?

They are also the researchers or students who ask themselves questions about the effect that the way they answer the above types of questions has on the overall research design. Questions such as these, for example:

• What happens to this part of the design if I make this decision and not another?

• How might this affect decisions that I have already made about other parts of my research design?

• Why am I thinking about these questions, and adopting a stance in relation to them, in the way I am?

Asking these types of questions of yourself requires you to fold back on your own thinking. In so doing, it enables you to reveal and understand your own research standpoints. It also enables you to recognize and think through the effects of those standpoints on the choices you have made in that design process. This is because reflexive thinking

challenge[s] some of your ways of knowing. . . . You may need to unlearn . . . what you bring to the learning and to see your knowledge and experiences as foundations on which you will continue building.
(Skukauskaite et al., 2018, p. 340)

Put another way, reflexive thinking enables the design of our research to be understood not only in terms of what it is but how it came to be the way that it is, thereby providing justification for the way that it is. Thus, “[r]eflexivity can be a way to examine the complete research process and a vital procedure for enhancing validity” (Lahman, 2018, p. 35) of all types of research—a point we return to many times in this book.

Activity

Understanding Our Own Research Standpoints: An Example of Thinking Reflexively

In a reflexive piece of writing, Sharlene Hesse-Biber thinks through and then declares how her background and assumptions, that is, her research stance, affected why and how she wrote her book Mixed Methods Research: Merging Theory With Practice (2010b).
We include some excerpts of this reflexive writing below, all of which are taken from page 25 of her book.

I am a feminist qualitative researcher who has a particular perspective on social reality. As a feminist, I am interested in asking a set of research questions that often trouble the waters of traditional knowledge building by including issues of difference in the research process. I am interested in issues of power, authority, and control while conducting research as well as asking such questions as: What is studied? From whose perspective? Who is being studied? Who is left out and needs to be included in this study?

I guess you might say that I am a methods interloper—an outsider and an insider to mixed methods research. As a sociologist who has had traditional training in quantitative methods and the positivist paradigm, I am an insider in that I practice and teach both methods and have in fact conducted several mixed methods projects. As a feminist, I am often the outsider who asks new questions, yet I will utilize a range of tools—quantitative and qualitative—as needed to answer my questions. I am not wedded to one specific method or set of methods. I use whatever methods will facilitate getting answers to my research problem(s).

As a researcher, my agenda is one of promoting a comprehensive approach and understanding of the use of methods techniques by placing the practice of methods more firmly within a research context. I am cognizant of the importance of living within the contradictions and tensions of the research process. I enter into dialogue with this process. To dialogue means confronting our assumptions, suspending judgment, and embracing difference. To dialogue also means to hone our listening skills, with a stance toward understanding.

—SHARLENE HESSE-BIBER

These excerpts provide an excellent example of reflexive thinking in action.
In this writing, Hesse-Biber is aware of how her background and interests might affect the way she thinks about, writes about, and conducts her research. Now think about your own research stance. How might your background and assumptions affect how you think about designing research?

Ethics: Much More Reflexive Thinking Still to Do

Reflexive thinking will force us to consider another set of related considerations when designing our research. These are considerations related to the ethical dimensions of our research. Research ethics are concerned with moral behavior in research contexts (Wiles, 2013). Thinking through ethics at all points of the research design process is part of responsible research (Kuntz, 2015). It is part of becoming a responsible methodologist (Kuntz, 2015) and a researcher who thinks “about how to become a more responsible author, scholar, individual, citizen” (Koro-Ljungberg, 2016, p. 126).
Thinking about research ethics and how to put that ethics-related thinking into practice impacts all aspects of our research design. This includes thinking about whether the research area, or the problem related to that area, that we are thinking of researching is something that should be researched at all, through to what we report about how we did our research and what our findings are. Lahman (2018) calls this “poking and prying with a purpose into what is good, bad, right, or wrong in research” and in fact uses this as her working definition of what research ethics is (p. 4). In the next chapter, we devote the entire chapter to taking a closer look at how we might put ethical principles into practice when designing our research. For now, the point to hold on to is that we will need to constantly think through the ethical dimensions and implications of decisions we make throughout the entire process of designing research. Thinking with ethics must be a visible and central part of the iterative thinking–based process from which a research design emerges. Ethical considerations sit in, and around, all aspects of the process undertaken to develop a research design.

CONCLUSIONS

Designing research is about making decisions to transform a research idea into a research plan. These decisions begin the moment that we begin to think about a topic that we want to know more about. This topic is the substantive focus of our research. What specifically do we want to know about this topic and why? What contribution is the research that we are designing intended to make to the development of knowledge in this substantive area? All researchers come to their research (or for our purposes, their research design) with “orienting ideas” (Miles et al., 2014, p. 19). Orienting ideas give a direction for the thinking that is done when designing research, as well as when putting that design into action.
What we decide about what it is that we want to know more about and why provides the basis for the formation of the questions that our research is being designed to address. Once we have developed those questions, we can then make decisions about how we will obtain the type of knowledge needed to address them. This will involve making decisions about what research methodology, and which research methods associated with that methodology, we will use in our study design. The methodological approach we employ provides the logic and rationale for the methods we choose to obtain the information or data that we need to answer our research questions. However, even when we have decided on those methods and how we will put them into practice, we are not finished making decisions about our research design. We will need to think about what we will do with the information we obtain from putting those methods into practice. This will include making decisions about how we will analyze the data or information produced by those methods, as well as how we will link our findings to the existing body of knowledge about the substantive area our research is being designed to contribute to. Thus, research design involves much more than simply selecting research methods or techniques that can be used to collect data. While research methods are part of a research design, they are not all of it. Rather, research design is a process. The decisions that we have made about our research design at every point when getting from here to there must be transparent—as must the reasons for why those decisions were made.
These decisions and choices include (1) what will be studied, more specifically the research problem and the questions that are asked about, and of, that problem; (2) the type of knowledge that the research is designed to produce, in other words, methodological considerations; (3) the way that that type of knowledge is produced, or more specifically, the methods used to collect and analyze information or data in the research; and (4) what the research is being designed to be able to say something about or be used for. For example, will it support or disprove a theory or proposition, or will it add nuanced or new information to build theoretical understandings? Or will it be used to do both if we are using combinations of methods in our research design?

When designing research, our thinking about research design cannot be limited to focusing on putting together some sort of linear plan made up mainly of data collection procedures and techniques stripped of the assumptions and thinking that gave rise to them in the first place. Otherwise, the thinking underlying the entire process of designing that research remains invisible and undeclared. Research design then becomes reduced to a diagrammatic representation of a linear series of steps or procedures without any accompanying text to explain that diagram and the way that it was developed. Producing, or in many cases simply copying and pasting, a diagrammatic representation and summary of a research design from some sort of textbook (usually about methods) becomes what research design, and designing research, is all about. What is overlooked, or even ignored, in all this is that when you cut and paste a diagram of a research design somebody else has developed, you are also copying and pasting a whole heap of (usually undeclared) assumptions and choices that the person developing the diagram made about, for example, what research is and how it should be done.
All research designs are full of assumptions and choices made by the person designing the research. These are assumptions and choices about what the purpose of the research is, what type of knowledge the research design will need to produce in order to address the research problem, and how that knowledge can be produced using methods and techniques to do so. These assumptions and choices provide the context for understanding the research design and how it was designed. Therefore, such assumptions need to be thought about, surfaced, acknowledged, and declared when we design our research.17

We have covered a lot of ground in this opening chapter. We will return to these ideas at various points in the chapters to follow. Like research design itself, this book is not meant to be read or thought about linearly. Nor are the chapters meant to be read in isolation from each other. Points made in one chapter are returned to and developed in later parts of the book. In the next chapter, we explore in more detail how thinking about ethical considerations is a central part of iterative research design.

SUMMARY OF KEY POINTS

Research design

• is a strategy that guides a specific research project.

• is about making decisions about what form various parts of that project will take.

• is about linking the purposes of the planned research to how that research will be conducted.
• addresses a specific research problem and related research questions.

• is more than the identification of methods or techniques that will be used to collect data.

• is made up of theoretical, methodological, and ethical considerations that shape the design.

• uses relevant and credible research literature at all points of the research design process to assist in the development of that design.

• is an iterative, nonlinear process.

• requires reflexivity on the part of the researcher throughout the entire research design process.

• Rather than simply being a set of individual procedures or steps, research design is a thoughtful, reflective, and ultimately reflexive process that constantly requires us to pause in order to consider what we are doing and why.

• What emerges from this process is what is called a “research design,” the shape and substance of which is made up of decisions and choices made about a number of areas.

• All of these decisions and choices are interconnected and cannot be viewed or made in isolation.

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

reflexivity/reflexive thinking
relevant literature
research design
research ethics
substantive area of research
theory
empirical
iterative process
methodology
methods
qualitative approaches
quantitative approaches

SUPPLEMENTAL ACTIVITIES

Try one or both of the following exercises designed to assist you in developing the type of reflexive thinking that is central to research design and that you can use to ask yourself questions about research reports you are reading or the research you are designing yourself.

1. Obtain a report outlining the findings of a research study. Look for the level of detail about the way the research was designed and what was discussed and what was not. Here are some examples of what to look for:

• What was the research about and why was it about this?

• How did the researchers choose to do their research and why did they do it in this way?
• Do they say what actually happened when putting aspects of the design into practice, and why this happened?
• Do they talk about any changes in their thinking about the research design during its development and also when putting it into practice?

• Do they discuss methodological, theoretical, and ethical considerations that impacted on their research design as it took shape?

2. Journal the decisions you make and why you make them when designing your research. If you are in the process of designing research, as you think about and work through the various chapters of this book, keep a diary or journal of what implications the discussion in each chapter has for the way that you will design that research. For example, after reading this chapter, write about your thinking concerning the role of theory in that design, and what assumptions you are basing that thinking on. Why are you thinking about theory in that way, and how does this impact decisions you might make about what type of data or information you will need as a result of that thinking? The diary or journal then becomes a record of the types of decisions you made, and why, related to the various areas that make up your research design. This provides a record of the reflexive thinking that underpins the design of your project.

FURTHER READINGS

Becker, H. S. (1998). Tricks of the trade: How to think about your research while you’re doing it. The University of Chicago Press.

Lumsden, K. (2019). Reflexivity: Theory, method, and practice. Routledge.

NOTES

1. Cambridge online dictionary, https://dictionary.cambridge.org/dictionary/english/iterative, accessed 28/3/2020.

2. Merriam-Webster online dictionary defines empirical to mean “originating in or based on observation or experience.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/empirical. Accessed 25 Feb. 2021.

3.
For example, in Norway, underpinning the annual collection of data about each researcher’s publications is a list of recognized journals, articles from which will be included in that collection. Decisions are also made about which books and book chapters will be recognized based on where they have been published and by which publisher.

4. See https://rxcomms.com/blog/6-ways-spot-predatory-journal/vglasson/

5. See https://authorservices.taylorandfrancis.com/are-open-access-journals-good-quality/

6. Research that is published informally or noncommercially or remains unpublished is sometimes referred to as gray literature. Gray literature can include non-peer-reviewed but still useful sources such as government reports, statistics, patents, conference papers, etc.

7. The article with this headline was first published on the digital platform www.dailymail.co.uk of The Daily Mail on 9 July 2018 (https://www.dailymail.co.uk/sciencetech/article-5934673/Being-rich-successful-really-genes-study-suggests.html). Initially, the article claims that “[S]cientists have found social mobility is partially written into our genes, which can make us high-flyers or high-earners” (paragraph 1). Reading a bit more, the article says that “[t]he authors say our genes explain only roughly four
per cent of differences in social mobility” (paragraph 13) and that “the effect of the ‘genes for education’ on any one child’s life is small” (paragraph 27).

8. See Chapter 5 for what we mean by qualitative approaches to research.

9. This point is picked up and developed in Chapters 6 and 7.

10. See Chapter 5 for what we mean by quantitative approaches to research.

11. This point is picked up and developed in Chapters 8 and 9.

12. These designs are known as positivist or post-positivist; see Chapters 4 and 5 where this is discussed in detail.

13. See for example the discussion of the qualitative research interview in Chapter 6 and the quantitative survey in Chapter 9.

14. See Chapters 6 and 7 for a detailed discussion of this type of interview.

15. Hindess (1996) argues that there have been two conceptions of power that “have dominated Western political thought in the modern period” (p. 1). One “is the idea of power as a simple quantitative phenomenon,” and the “second, more complex understanding is that of power as involving not only a capacity but a right to act, with both capacity and right being seen to rest on the consent of those over whom the power is exercised” (p. 1). Theoretical writing and understandings of power in use may draw on one, two, or several theoretical traditions or variants thereof.

16. If you would like to read further about this, Lumsden (2019) in her Introduction to her book Reflexivity: Theory, Method, and Practice provides a good (and accessible) introduction to, and discussion of, this complex idea.

17. Chapter 11 provides good examples of researchers declaring their hand in terms of the assumptions and thinking that underpin their research design, and discusses why such declaring of your hand is part of being a responsible researcher.
2

ETHICAL ISSUES IN RESEARCH DESIGN

PURPOSES AND GOALS OF THE CHAPTER

In this chapter, we introduce and develop the idea of research ethics as an orienting idea framing and permeating the entire research design process. We explore the thinking that you will need to do about ethics when developing your research design. We stress the point that thinking about research ethics, just like research design itself, is a constant process. When doing so, we distinguish between research ethics and gaining ethics approval from a research committee. We take a close look at foundational concepts associated with research ethics such as the principles of participant confidentiality, anonymity, and consent. We focus on what we will need to think about when putting these ethical principles into practice when designing our research. Ethical challenges and dilemmas arising from contemporary trends, such as the repurposing and sharing of existing data and the increasingly rapid digitalization of many aspects of social life, are also explored. In addition, we discuss the role of ethics committees, emphasizing that they are more than simply regulatory authorities. Throughout the discussion we highlight that thinking about ethics is an important part of the reflexivity1 that underpins the entire research design process (see Chapter 1). Such thinking impacts on all aspects of the development of our research design, from the moment we have an idea about a possible research problem, to well after we complete our research, including what we say, and how we write, about that research. Having this discussion early in the book (the second chapter) is a deliberate choice made to emphasize the centrality of reflexive thinking with ethics when designing our research. We develop the ideas and issues introduced in this chapter throughout the rest of the book. In this way, we model and capture an iterative way of thinking with ethics when designing our research.
This is important because ethical matters related to research design are multifaceted and dynamic—they do not keep still and often emerge as a research design develops and then is put into practice.

The goals of this chapter are to

• Introduce the idea of research ethics.
• Focus on the thinking that you will need to do about ethics when developing your research design.
• Explore participant confidentiality, anonymity, and consent as foundational, but at times problematic, concepts in research ethics.
• Consider the impact that sharing or repurposing data, using digital platforms when collecting data, or using nonresearch-generated sets of digital data as part of our research design has on putting ethics into practice.
• Explore the role of ethics committees.
• Emphasize the centrality of reflexive thinking with ethics when designing research.
• Provide examples that illustrate the impact that thinking about ethics has on the way we design our research and put it into practice.

WHAT IS RESEARCH ETHICS?

From the minute we begin thinking about what we might study and why, and then how we will study it and why, we are constantly interfacing with ethical considerations. Ethics is a term that all readers will have come across at some stage. But what exactly are we talking about when we talk about ethics or research ethics? Answers to this question range from whole books, to a few sentences, and all places in between! Wiles (2013) provides us with a useful entry point and orienting framework for exploring this question. She writes, “Ethics is the branch of philosophy which addresses questions about morality” (p. 4). This then raises the question of what morality is, and how morality is linked to ethics. Morality focuses on the “personal set of values and beliefs which guide self-discipline (including respect for others). . . . Ethics is an attempt to codify and regulate morality by stipulating norms and principles for behaviour” (Duncan & Watson, 2010, p. 50). It is not always easy to distinguish between morality and ethics as types of reasoning. This is because they are so closely linked, and because they are often used interchangeably. Research ethics are concerned with moral behavior in research contexts.
Principles and issues often identified in relation to thinking with research ethics in the design and conduct of research include “respecting human dignity, respecting persons, and being concerned for welfare and justice” (van den Hoonaard & van den Hoonaard, 2013, p. 15). Therefore, you will often see research ethics discussed in terms of doing no harm to participants (nonmaleficence); having some positive benefit to participants or society (beneficence); and respecting participants’ decisions about what happens to them in the research, including whether they choose to participate in it (autonomy or self-determination). In addition, no participant, or group of participants, should be either advantaged or disadvantaged over others (Anderson & Corneli, 2018). Putting ethical principles into practice when designing research will require us “to continually reflect on the ethical implications of researching people’s lives” (Duncan & Watson, 2010, p. 52) throughout the entire design process. This includes thinking about and asking ourselves questions such as these: What will we tell our participants about our study and when? What will we write about what they tell us—how and why? What will we do if our research uncovers issues of abuse or illegality? How will we look after our data and who has access to it? What will we say about how and why we did our research in a particular way? What will we include in, and what will we leave out of, the reporting of our research, and why? Such reflexivity is central to developing what van den Hoonaard and van den Hoonaard (2013) refer to as a researcher’s “inner ethical poise” (p. 13). In the discussion to follow, readers are encouraged to develop awareness of, and explore, both their inner ethical poise and why that awareness matters in terms of the way we think and act when designing our research.
Regardless of which research methods we intend to employ in our research design, we will be forced to consider and think reflexively about what van den Hoonaard and van den Hoonaard (2013) describe as “keystones in any ethical research” (p. 39), namely participant confidentiality, anonymity, and consent. They note that these keystones form a complex triangle of connected ethics-related issues. This triangle is bent into various forms by the circumstances and context of each individual research project and design. For “[w]hile there are a number of ‘common’ ethical issues [such as informed consent, anonymity and confidentiality, and risk and safety] . . . research is always situated and contextual and the specific issues that arise are often unique to the context in which each individual research project is conducted” (Wiles, 2013, p. 9). In the next sections of this chapter, we will take a closer look at what it means to put these keystones of ethical research into practice when designing our research.

PUTTING INFORMED CONSENT INTO PRACTICE

Before we begin collecting information from people in our study, it will be necessary to gain their consent to both collect and use that information. How to obtain that consent, and how to ensure that it is informed consent, must be a central part of our thinking from the outset when we are designing our research—not an afterthought. But what is informed consent? Informed consent is when participants agree or consent to participate in the research. It means that the participant understands both that they are giving consent and what they are consenting to. Such consent relies on full disclosure by the researcher of what participating in the research involves. To make an informed decision about whether to consent to participate in a research study, potential participants must be given transparent, and sufficient, information about the study in an appropriate form.
This is so that participants know

• what the study is about;
• who designed the research and decided the way it will be done;
• what their role as a participant in the study will be;
• who will actually do the research or collect information from them;
• that they can decide whether or not they want to participate;
• that they have the right to withdraw from the study, or withdraw any information collected about or from them, at any time without having to give reasons;
• what benefit they may or may not receive from being in the study;
• who else might benefit from the research and how;
• any risks that they may encounter by participating in the study;
• any costs to them (including time) of participating;
• what will be done with the information that they provide to researchers;
• who the researchers are, their affiliations, and the source of any funding received for the research.
Each of these points should be thought about and addressed as you are designing your research. This means that you will be thinking with, and about, ethics at all stages of the development of your design. To be able to give clear information about each of them means making our research design, and our thinking about that design, as clear, honest, and transparent as possible to potential participants. This includes the actual research questions, the aims and the rationale for the research, the ways in which the research will be conducted, and what we will say about the research, and to whom, after it is completed. For example, we cannot deceive people about aspects of our research by telling them that our research is for one thing when in fact it is for another. Nor can we imply some advantage to participating when that might not be the case, thereby raising false hopes and expectations on the part of our participants.2 This is because an important part of informed consent is that the consent is given freely and not because the participant is offered inducements (such as financial gain) to participate in the study, nor because there are negative consequences for them if they do not participate.

Informed Consent—Who, What, and When

We will also need to think about how much information to give, in what form, when, and to whom (Wiles, 2013). For example, we will need to think about how to keep the language that we use when describing our study (in information sheets and consent forms, for example) as concise and simple as possible. Thought will also need to be given to the appropriateness of that information for the participant group, in terms of the way that the information is presented, including the assumed level of literacy of the reader. A way of doing this is to ask “persons from the sample population of interest to review the consent form(s) prior to using it” (Lahman, 2018, p. 74).
PUTTING IT INTO PRACTICE

SEEKING CONSENT IN AN APPROPRIATE WAY

Lahman’s suggestion to ask “persons from the sample population of interest to review the consent form(s) prior to using it” (Lahman, 2018, p. 74) could usefully be extended to include asking these persons from the sample population of interest to also review the way in which that consent will be sought. For example, is a written consent form the best means of gaining consent in the population or community of interest? There may be culture-specific considerations that we need to take into account that make the use of a written consent form inappropriate.3 Persons from the sample population of interest could also be asked to comment on from whom that consent needs to be gained. For example, Duncan and Watson (2010) found, when researching in different cultural contexts, that in some communities it was necessary for them to obtain verbal community consent before attempting to get written consent from individuals in those communities.

Sometimes when conducting our research, the design of the research may change. For example, in many qualitative research approaches, aspects of the research design emerge as the study progresses. This can make giving information about exactly what participating in a study may mean problematic.4 Therefore, in such cases we will need to consider
when, and from whom, we will need to get informed consent, and overtly incorporate these considerations into every part of our research design, including constantly reviewing them as the design of our research evolves. The same consideration applies if our study design is some form of longitudinal study:

In longitudinal studies or research with repeated stages of data collection it may be appropriate to provide information, and gain consent, for each stage of data collection. . . . This approach highlights the importance of viewing consent as a process that is ongoing throughout a project rather than as a one-off event. (Wiles, 2013, p. 28)

Other issues related to informed consent arise if we intend to reuse data that has been collected in other studies. We return to this point in a later section of the chapter.

Informed Consent in Relation to “Vulnerable” Populations

There are some individuals and groups of people formally deemed “vulnerable” by ethics committees, government regulatory authorities, and professional associations. For example, the Declaration of Helsinki5 explicitly states that “[S]ome groups and individuals are particularly vulnerable and may have an increased likelihood of being wronged or of incurring additional harm. All vulnerable groups and individuals should receive specifically considered protection” (World Medical Association, 2018, item 19). In general, a group is considered vulnerable if there is good reason to believe that individuals in that group may, for some reason, have difficulty providing free and informed consent to participate in research. In the Declaration of Helsinki (World Medical Association, 2018), individuals incapable of giving informed consent, individuals likely to consent under duress, individuals with an increased likelihood of incurring additional harm, and individuals who do not benefit from the results of the research are all considered vulnerable.
TIP

BE AWARE OF LOCAL UNDERSTANDINGS AND REQUIREMENTS RELATED TO WHO IS DEEMED VULNERABLE

Individual countries may also designate which groups are considered vulnerable. For example, in the United States, groups designated as vulnerable populations include “children, prisoners, individuals with impaired decision-making capacity, or economically or educationally disadvantaged persons” (Department of Health and Human Services, 2018). Therefore, it will be important to make sure when designing research that you are aware of the local understandings and requirements related to who is considered a vulnerable participant and what that means for gaining consent.

The use of the designation “vulnerable,” while well intentioned, has been critiqued on a number of grounds. One ground is that this designation may actually work against the best interests of groups deemed vulnerable. This is because increased regulation and requirements for accessing participants may make research with these groups increasingly difficult and thereby reduce the amount and scope of research focusing on these populations. This can lead to the situation described by Markham et al. (2018) where “systems of
research and ethics governance do not facilitate and support social research that needs to be done, because it has been traditionally been [sic] prohibited” (p. 3). Another criticism is that the designation of vulnerable may take away the right of individual participants in these groups to participate in research. They may be excluded on the assumption that they are vulnerable (predetermined by others’ definition of that term), and therefore not competent (predetermined by others’ definition of that term), to give informed consent (also predetermined by others’ definitions of that term). The effect of this is that large groups of people deemed vulnerable, such as children or those with cognitive disabilities, “who all in various ways could stand to benefit from having their living conditions elucidated by research” (Juritzen et al., 2011) are in effect excluded and their rights and interests further marginalized.

All of this raises a series of questions around informed consent for which there are no clear-cut answers. These questions include: Who decides whether a person is competent to participate in a study, particularly if the study is about something that the person has knowledge of and wants to participate in? Whose rights predominate? Who is able to speak for whom, and when? Reflexively thinking about these questions and their effect on the way that we design our research can help us avoid some of the taken-for-granted assumptions that we might otherwise make about informed consent that on the surface may seem quite reasonable. It can help us shift the focus of our thinking from whole-of-group vulnerability to the vulnerability of individuals, or groups of individuals, within that whole-of-group designation. In so doing, it can shift the emphasis in our thinking to the idea of research participants as “capable and competent yet vulnerable at the same time” (Lahman, 2018, p. 13).
This can open up possibilities for the inclusion of people in our research who otherwise may be excluded because of standardized and normalized understandings of what it means to be vulnerable.

PUTTING CONFIDENTIALITY AND ANONYMITY INTO PRACTICE

The ideas of confidentiality and anonymity are closely related. At times, you may even see these terms used interchangeably. While there is no doubt that they are linked, they are not the same. Confidentiality is wider in scope than anonymity. Keeping a research participant’s identity anonymous is part of confidentiality. However, confidentiality is not ensured simply by keeping the participant’s identity anonymous. For example, it is possible to breach confidentiality if information from an anonymized participant is reported or used in some way in the research when that participant specifically requested that it not be used (Wiles, 2013). Confidentiality is more than just not disclosing the name, identity, or identifying features of participants. It is also about the way that any data that a participant has provided, or that is related to that participant, is shared or not shared, and with whom.

Anonymity is used by researchers to protect the identity of the participants in their study. In research design, anonymity often involves using pseudonyms when referring to participants or sites in the study. A pseudonym is a fictitious name given by researchers to participants or sites in a research study. The pseudonym is used instead of the real name when reporting and discussing the research. However, even such a seemingly benign and simple process requires some careful thought when designing your research, and in itself should not “be confused as equal to ethical research” (Lahman, 2018, p. 83).
For example, one thing we will need to think about is what pseudonyms will be used and who will choose them and how. Will we ask participants to choose their pseudonym or will we as researchers do this? If we choose the pseudonym, what impressions, intended or otherwise, might be given about a participant by assigning them a particular pseudonym? Being able to rename someone involves the use of power (Hurst, 2008; Lahman, 2018).

PUTTING IT INTO PRACTICE

WHAT DO YOU DO IF IT IS IMPOSSIBLE NOT TO IDENTIFY THE SPECIFIC SITE OF A STUDY?

Sometimes, it may be impossible not to identify the specific site of a study. For example, a master’s student, Christine Moe Grav (Grav, 2015), whom one of us supervised, wanted to study the effect on a group of middle managers in a specific government department of having to implement a government-mandated departmental structural reform that would result in them losing their positions. In other words, these managers would have to manage themselves out of their jobs, since the result of the reform would be that there was no more middle management. There were 15 of these middle managers in this specific department. Christine was interested in how these 15 managers experienced this process and what those experiences could tell us about the process of change management. The problem she faced was how to do this study in a way that ensured confidentiality and anonymity for this group of 15 participants, even though it was impossible to do the study without identifying the actual reform, and therefore the department that the reform affected, and therefore the 15 managers. Christine protected the confidentiality and anonymity of each individual manager in terms of what information they contributed to the study by making it impossible to identify which manager had said what.
This was done by not linking any demographic data to individual participants or to the information they provided for the research. If Christine had linked the participant or the information they gave in their interview to demographic data collected about them specifically, such as length of time in the position, it would not have been difficult for a reader to cross-reference that information to the 15 managers and figure out who had said what. By providing ranges of length of time in the position rather than listing experience levels for each individual anonymized manager, Christine was able to provide a demographic snapshot of this group while preventing specific information from being linked to individual managers. When citing her participants, Christine did not include any demographic data about the participant in question.

Sometimes we may need to consider what we will do if a participant does not want to be given a pseudonym but wants their real name to be used or their identity to be linked to the data collected about them. In these instances, it is important to make sure that participants understand what having their real name used will mean in terms of the way that the information is presented in public arenas such as publications, presentations, and reports. It means making it clear that it will be possible to identify them and link the information that they gave, their data, to them.

The Use of a Pseudonym Does Not Necessarily Ensure Anonymity

We will also need to remember that in itself the use of a pseudonym for individual participants does not necessarily ensure anonymity for participants. For example, if you identify
the specific region or town or organization or sector that the study is located in, it might not be difficult for people to backtrack and identify who some of the participants might have been in the study. This is particularly so if we collect demographic data about our participants and directly link that data to other data collected in the study. An example of this would be if we report excerpts from qualitative interviews in the following way: “managers here are not good at explaining things” (Julia, Female, 38 years, 2 years’ experience, mid-level manager). While Julia is not the person’s real name, the demographic data is real. Therefore, if the site of the study can be identified, it will not be difficult to work out who “Julia” is. This can also be the case even if the data are reported using only the pseudonym itself, for example, “managers here are not good at explaining things” (Julia). This is because if, in a table of demographic data, each anonymized participant has been listed and their demographic data attached to their pseudonym, then it is a simple exercise to backtrack and connect that demographic data to the participant. Consequently, Morse (1998) suggests reporting ranges of the demographic data of the entire participant group (e.g., age ranges), rather than specific demographic data of each individual participant (e.g., age). She also suggests that we “do not attribute each quotation to a particular participant, unless there is a compelling reason to do so” (p. 302), arguing that the researcher will have selected particular quotes as illustrative exemplars of their findings and so attaching a quote to an individual (even with a pseudonym) is not necessary. This does not mean that demographic data about the individual participants in a study cannot be collected or used in a study.
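For readers who manage their participant records digitally, Morse’s suggestion of reporting group-level ranges rather than per-participant values can be sketched in a few lines of code. This is only an illustrative sketch, not part of Morse’s or the authors’ work; the participant records and the summarize_ages function are hypothetical.

```python
# A minimal sketch (hypothetical data and function name) of reporting
# demographic data as a group-level range, never as per-participant
# values attached to pseudonyms or quotes.

def summarize_ages(ages):
    """Return one aggregate range string instead of per-person ages."""
    return f"{min(ages)}-{max(ages)} years (n={len(ages)})"

# Hypothetical participant records; rows like these should stay in
# secure project files and never appear alongside quoted excerpts.
participants = [
    {"pseudonym": "Julia", "age": 38, "years_experience": 2},
    {"pseudonym": "Marco", "age": 51, "years_experience": 12},
    {"pseudonym": "Priya", "age": 44, "years_experience": 7},
]

ages = [p["age"] for p in participants]
print("Participants aged", summarize_ages(ages))
# → Participants aged 38-51 years (n=3)
```

Publishing only the aggregate line, and keeping the per-participant table out of any report or appendix, removes the cross-referencing route that the “Julia” example illustrates.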
The key point is that specific demographic data, no matter what methods are used, must not be linked to individual participants in a way that could identify those participants, if the participants have been promised anonymity when participating in the study. Not linking data in this way also protects the confidentiality of what that anonymous person has said.

PUTTING IT INTO PRACTICE

Navigating Ethical Issues Is Not Always Straightforward

Our discussion of putting confidentiality and anonymity into practice has highlighted that this is not always straightforward, even if you have obtained informed consent and taken steps to ensure the anonymity and confidentiality of the participants and sites in your research. For example, some form of observation is often used by researchers as a way of obtaining information about a particular social setting of interest.6 Different sorts of issues arise when making those observations, depending on what type of observations you make and where. Issues may also arise from your level of involvement in what is being observed. Therefore, issues that you will need to think about include

• if you intend to make observations part of your study design, you will need to consider in what capacity you will make those observations. For example, will you make those who are being observed aware that they are being observed, or what is being observed, or why? How does your decision about this impact on informed consent and also the privacy of those being observed?
• if you are going to research people in public places such as shopping centers or public streets, will you need to get consent from every person you observe?
Given that in a shopping center or public street it may not be possible to do this, does this mean that it is not possible to do this type of observational research?
• if, when you are making your observations, you observe an interaction or behavior that is concerning in some way—for example, bullying or sexual harassment—what will you do? What about if you observe what you consider to be inadequate or incompetent professional practice? Should you intervene or report that behavior? What about anonymity and confidentiality considerations in all this?
• if you are a student and then return to debrief with others your observations and your analysis of those observations, how will you ensure the anonymity and confidentiality of that observational data when you talk about it?

You will need to think through and take up your own ethical position in relation to these types of dilemmas and justify what you choose to do as part of your research design. Although we have used observations as the vehicle for the discussion here, the same sorts of considerations apply to other methods used to collect data from and about people, such as interviews and surveying. There is no simple answer to these types of questions. However, a good piece of advice is given to us by Ryen (2011):

There are no standard answers to these dilemmas. . . . We need to be prepared for all these challenges, which demand that we put them on the agenda from the very start of our projects and ask ourselves how they relate to our own particular project. But what should we do? A good piece of advice is always to invite experienced researchers with particular knowledge in research ethics and in your field to discuss matters with you. (pp. 418–419)

WHAT YOU NEED TO THINK ABOUT WHEN REUSING, REPURPOSING, AND SHARING DATA

Some research designs involve “repurposing, reusing, combining, sharing and linking data in new ways” (Ballantyne, 2019, p. 357).
Indeed, there are increasing calls that researchers should share their data (Meyer, 2018), resulting in pressure directly and indirectly being placed on them to do so. Some journals and funding bodies are requiring (or at least strongly encouraging) some sort of data sharing as a prerequisite for publication or funding. For example, the UK Economic and Social Research Council Data Policy, last updated in 2021, states that “[p]ublicly-funded research data are a public good, produced in the public interest, which shall be made openly available and accessible with as few restrictions as possible . . . for future research” (Economic and Social Research Council, 2021, principles 1 and 2). One of the central questions that the reuse or sharing of data as part of a research design raises is who decides that it is OK to share and reuse that data, and what can or cannot be shared. As Flick (2015b) puts it,

If we start interviews, for example, by asking participants for their (informed) consent—are such permissions for doing research or the informed consents obtained valid for re-use of data for all purposes and for every other researcher? What does this mean for clearances by ethics committees? (p. 604)
To address these important questions requires us to consider what implications the repurposing of data has for one of the keystones of ethical research, namely informed consent. A central consideration is whether a participant’s consent for reusing, repurposing, and sharing their data can ever really be informed. When consenting to the reuse of their data, research participants will not necessarily know exactly what their data will be reused for, or by whom. This raises the possibility that participants’ data could be used in a study that they would not have consented to their data being reused in, because of, for example, who is doing the study or who is funding the study or what the aims of that study are.

The potential difficulties of obtaining informed consent for the use of data from previous studies have led some commentators to suggest that the primary question for the reuse of such data is not “‘[C]an we get consent’, but rather ‘Does the public interest in using the data outweigh individual interests in controlling access to the data’?” (Ballantyne, 2019, p. 358). However, this suggestion raises yet another set of questions related to informed consent that will need to be thought through and about. Questions such as, “Is there an upper limit to the risks that individuals should bear for the sake of the public benefit of data use? What legitimizes decision-making processes? Who should be held accountable for data misuse and how?” (p. 365). To these questions posed by Ballantyne we can add even more related questions, such as: What does an upper limit to the risks that individuals should bear for the sake of the public benefit of data use actually mean? Is that harm OK in some instances? Who will decide what this upper limit is, will it be applied equally to everyone, and will we know that upper limit when consenting to our data being reused or repurposed? Similarly, who will decide what the public benefit of data use is?
Will this be a legal, regulative, or ethical decision, and an individual or a collective one? How will it be enforced? Moreover, again, who will decide this, how, where, on a case-by-case basis or . . .? Therefore, if when designing your research you are thinking about using data for which the original consent did not explicitly include a clause or statement about data sharing or reuse, then you will have even more thinking to do. This relates to whether the more “effective” and “efficient” use and reuse of data for some sort of greater public good should outweigh the rights of each individual participant to determine the way that their data are used and who has access to it.

How to Address These Types of Questions?

Beck (2019) suggests that using multiple layers of consent is one way of addressing, or at least beginning to think about how to address, the ethical issues that arise around informed consent when participants’ data collected for one purpose is used for another. Such layered consent may make it possible for the participant to have more control in relation to the extent of, and purpose for, the reuse of their data. This is because, with multiple layers of consent, research participants giving informed consent for a specific study (the primary study) would also be explicitly asked whether they consent to some or all of (1) participating in the primary study and only having their data used in that study, (2) the researchers in the primary study being able to reuse that data in later studies, and (3) the archiving/storage of that data, which may then be accessed by other researchers and reused in other studies. However, even if multiple layers of consent are used, the issue that still arises is how much information we need to give about each of these layers before we can say that any
“consent for reuse” (Bishop, 2009, p. 262) that is given really was informed. How explicit does information about each of these layers need to be about what data will (and won’t) be shared, when and how, with whom, and as decided by whom, in order to be able to claim informed consent for the reuse of that data and/or the use of that data in a specific secondary study? There are no easy or “correct” answers to questions such as these. However, if you are considering using a research design that involves data sharing and/or reuse and/or the archiving of your study’s data for future researchers to access and use, you will need to think through these questions. You will also need to declare how you have designed your research with these questions in mind. This is not easy to do, as “[b]alancing the rights and responsibilities of the primary researcher and the research team, secondary researchers who want to make use of the data, the data archivist, research participants, research funders and the general public may present significant challenges” (Wiles, 2013, p. 88). Thus when designing research that involves some form of data sharing or repurposing, it is important to think reflexively, not simply linearly, about the use of that data after it is collected. This includes after your research is concluded.

WHAT YOU NEED TO THINK ABOUT WHEN USING INFORMATION ON THE INTERNET AS DATA

The advent of the internet and the rise of digital data have refocused researchers’ attention on concepts like “privacy” and “informed consent.” They raise questions such as, “When should someone expect confidentiality on the internet? When should a researcher seek a participant’s permission to conduct research on the internet?” (Lahman, 2018, p. 209).
For example, if information on social media or social networking sites is “harvested,” “mined,” or “collected” in some way and used as data in a research study, does that mean that consent needs to be given for the collection and use of that data? If so, how might this be done? For example, what if someone collects digital traces of our everyday activities, again without our knowledge or consent, and aggregates them into massive sets of digital traces known as Big Data?7 Such digital traces can be in the form of text, videos, and images taken, for example, from social media (such as Facebook or Twitter), personal blogs, chatroom conversations, and other forms of internet communities. The digital traces also include data transmitted by microchips embedded in “clothing, products, credit cards, and passports” (Mills, 2018, p. 596), the Internet of Things,8 as well as records of some form of activity generated by activity bracelets and individual health monitoring devices. Then, what if that Big Data set is used as the basis for algorithmic analyses that are used to predict aspects of our behavior such as our political leanings or likely consumption patterns?9 Who regulates or makes decisions about this? Who actually owns that data, and who can, and cannot, give consent for digital traces to be used for research and other purposes? If we are planning to use this type of data, these questions need to be thought through carefully.

Blurring the Boundary Between Public and Private

The accessibility of digital data, such as digital traces of our daily activities, forces us to consider what is public information and what is not. In the digital age, this is not so easy to answer given that “much of Internet behavior is both private (person in their home residence) and public (person is active in the Internet) simultaneously” (Lahman, 2018,
p. 200). For example, in a study designed to reveal the potential of large datasets, Danish researchers Emil Kirkegaard and Julius Bjerrekær (2016a, 2016b) obtained access to a large set of data by pretending to be looking for partners on a dating site. In fact, their intention in joining the dating site was to scrape10 the data in the user profiles of members of the dating site OKCupid.com. From the information they obtained, they developed a publicly available dataset. The researchers justified not obtaining informed consent to access this information by claiming they were merely presenting already publicly available data and hoped that other researchers would use the dataset to address other research problems. They stated that a dataset collected by researchers is too often “not used to its full extent” (Kirkegaard & Bjerrekær, 2016b, p. 1), and this slows down “the progress of science immensely because other researchers would use the data if they could” (p. 1). The ensuing controversy was massive. OKCupid users, academics, online commenters, and others accused the researchers of making confidential and sensitive private-user information public. However, the researchers continued to argue that the dataset was presenting already publicly available data. This case raises the question of whether an internet-related activity, such as becoming a member of OKCupid, is private or public or both private and public simultaneously. It also raises the question of whether, even if something is public, it is acceptable to use private information made public as research data. The increasing use of data from the internet by researchers gives rise to many questions about what is public and what is private. For example,

• when someone posts information about themselves or others on social media such as in tweets or blogs or conducts a conversation in a social networking site such as Facebook, who can use that information and what for?
• when people post information about themselves and aspects of their lives on public domains on the internet, where the information is accessible by all, does that mean that this information can be used as data in our research studies without asking the people who posted it if they consent to this? In other words, does the fact that this information is being used formally as research data change the thinking about ethics we need to do?

• what about “public” data in a closed or private internet community, such as OKCupid?

• how do we make that decision?

• who makes that decision?

• given that digital traces have no geographical boundaries, what formal regulatory requirements will need to be met?

Further complicating matters when trying to address the issues that these points raise is the rapid rate of growth and change in both the development of technology related to digitization11 and the use to which that digitization can be put. This includes the commercialization of data, such as the buying and selling of sets of digital traces.12 Who should profit from that sale? Is this exploiting the people from whose activities the digital traces were harvested? Who owns the data? During the time that has elapsed between us writing
this chapter and you reading it, there is no doubt that many new and different questions will have emerged around thinking about ethics in relation to digitally based research and informed consent. It is not possible to predict where discussions of the issues already being raised by some researchers and scholars about the possible need for changes in our thinking about consent and privacy in a digital age will take us. None of this means that the principles about the importance and centrality of the idea of informed consent and privacy (or any other ethically related matter) we have discussed previously in this chapter are outdated or no longer relevant in light of a fourth industrial revolution and increasing digitization more generally. Quite the opposite! Rather, all of this reminds us of, and returns our thinking to, a point made earlier: “Ethics is not a static event but a continual process” (Sparkes & Smith, 2014, p. 206). New and different issues constantly arise related to informed consent, anonymity, and confidentiality. This includes research that is driven by various forms and aggregations of digitized data.

WORKING WITH ETHICS COMMITTEES

In many countries, formally constituted ethics committees, also known as Institutional Review Boards (IRBs), regulate ethical matters relating to research design and conduct. When designing your research, you will need to find out what these formal regulatory requirements are, consider them when designing your research, and obtain the necessary ethics committee approval to proceed with your research. This should happen before you contact potential participants or commence collecting any data. In the case of multiple-site research studies, you will need to ascertain if you are required to submit your research design to several ethics committees if each site has its own ethics committee. You will also need to find out what form your submission should take.
Is there a template that you are required to follow? Does the template vary from committee to committee? Ethics committees, and their underpinning understandings of research, are often influenced by the traditions of medicine. This is because “the original focus of IRBs and the context from which they emerged was that of medicine and the scientific discourse that underpins medicine” (Cheek, 2008, p. 57). For example, in the United Kingdom, the Royal College of Physicians in 1967 recommended that all medical research be subject to ethical review. By 1991, every health district was required to have a Local Research Ethics Committee (LREC). In addition, Multi-centre Research Ethics Committees (MRECs) emerged as a means of helping streamline proposals that otherwise would have to go through numerous LRECs (Ramcharan & Cutcliffe, 2001). Similarly, in the United States, the impetus for the initial development of IRBs also came from medicine and the science underpinning medicine. Increasing public and government concern over unethical research studies, such as the Tuskegee Syphilis Study,13 ultimately resulted in the Belmont Report from the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in 1979. This report detailed fundamental ethical principles for the conduct of human research: beneficence, respect, and justice. These principles are what shape IRB standards in the United States today. Ethics committees, such as IRBs, consider how a proposed research design puts principles such as beneficence, respect, and justice into practice. For example, they consider whether the research is worth doing in terms of producing useful and significant findings and outcomes. They also examine whether the proposed research design will enable
trustworthy results. This means that the methods to be used to collect data from participants need to have been clearly explained. This includes recognizing potential issues that may arise from those methods about informed consent, anonymity, and confidentiality and how they will be addressed. Particular attention is often focused on the information sheets, or sufficient information in some other form, that participants will be given about the research before they sign a consent form or give consent in some other acceptable way to participate in the research. Ethics committees are also interested in how data and information related to the study will be stored and who will have access to that data. This is important because a seemingly “ordinary” thing, such as storing data on a laptop that is stolen during the research, can destroy the entire confidentiality and anonymity of a research project and compromise its ethical integrity.

Focusing on the Principles, Not the Requirements

When thinking about where research ethics “fits” into the research design process, many researchers immediately focus on what they must do in order to gain “ethics approval” from an ethics committee or review board. As a result of this sort of focus, a form of shorthand terminology such as “getting ethics” has emerged as part of the research design process. The danger in the consolidation of this type of shorthand terminology is that thinking about ethics is reduced to how to meet the requirements for ethics committees’ approval. “What are the requirements, and have they been met?” becomes the focus rather than the ethical matters themselves. This type of thinking can lead to “checklist” type thinking about ethics. In checklist thinking, ethical thinking becomes focused on ticking the boxes by “passing” or “meeting” those specific requirements.
Often the thinking that sits behind those requirements is lost, or at best becomes a secondary focus, with the main goal being to get the approval. Ethical matters, and thinking through ethics throughout the research design process, cannot, and must not, be confined or reduced to a static list of standardized steps to be followed that can be made into a checklist and then ticked off as complete one by one: for example, do I have a consent form, an information sheet, and so on? In such thinking, the often unstated assumption is that if you address all the things on the list and get a tick next to each of them on some sort of literal or mental checklist, then your research design is “ethical,” ethics “approval” has been gained, and your thinking about “ethics” is done. In fact, what actually has been gained is approval to proceed ethically. If the thinking about ethics when designing research is reduced to a focus on meeting the requirements of an ethics committee or checklist, the danger is that the focus on the ethical issues themselves, or ethics in relation to the particular research being designed and conducted, can be lost once the approval is gained. Just complying with predetermined procedures, such as the format that an information sheet must take, may not necessarily ensure that the design and conduct of the research is ethical. Research design, the research process, and therefore the thinking through ethics that influences every part of that research process are not static but continually unfolding and developing—this is why your thinking about ethics is not finished when you have met the requirements of an ethics committee—in many ways it has just begun. New ethical issues may arise, or existing ones may change, during the life cycle of your research, forcing you to revise your initial research design.
The key point that emerges from all this is that there is much more to research ethics and ethics committees than regulation and procedures. To focus our thinking about ethical matters related to research design solely on meeting or “passing” the procedural requirements of an ethics committee runs the risk of standardizing and limiting thinking about both ethics and ethics committees, reducing them to matters of procedure—where ethics is “got” or the requirements “met.” Such thinking reduces the role of ethics committees to an administrative focus and loses the important contribution that they can make as educational and scholarly bodies concerned with advancing conversations and understandings of ethical research itself, thereby developing researchers’ “ethical literacy” (Wiles, 2013, p. 1) and inner ethical poise.

Activity: Finding Out About Professional Ethical Guidelines

Thinking with ethics with respect to your research design will also need to take into account relevant “[p]rofessional ethical guidelines and codes [containing] disciplinary norms of ethical behavior,” but which “are not legally enforceable” (Wiles, 2013, both quotes from p. 6). Such a code identifies standards of behavior and conduct based on core ethical principles and values of a profession, organizations related to that profession, and those working in that profession. Examples of such guidelines include the ICN Code of Ethics for Nurses (International Council of Nurses, 2021), the Code of Ethics of the Education Profession (National Education Association, 2021), and the Code of Ethics for Social Workers (National Association of Social Workers, 2021). Find out if there are professional ethical guidelines or codes related to the area that your research will be conducted in, and think about if, and if so how, these codes or guidelines may affect your research design and the way it is put into practice.
CONCLUSIONS

Thinking about putting ethics into practice is a reflexive part of the research design process and part of being a responsible researcher. It is irresponsible to design research without constant consideration of ethical matters from the initial idea until the conclusion of that research, and sometimes even beyond that, for example, when we are considering reusing a study’s data. When unexpected issues arise during the research, reflexive thinking provides a way of thinking through the issues and seeing them for what they are, making decisions related to that thinking, and being able to justify both to yourself, and others, why you made the decisions you did. These others include the participants in your study, your supervisors if you are a student, the readers of your research, and formal bodies such as ethics committees. Thus, thinking about ethics cannot be reduced to a series of predetermined decisions about whether or not we meet certain predetermined criteria, and therefore can say that something is ethical. Such reductionist thinking reduces research ethics to techniques or points to be checked off on an “ethics to-do checklist.” Checklist thinking loses sight of the fact that “to-do” requirements such as having a consent form or an information sheet,
storing data appropriately, and ensuring confidentiality and anonymity are outworkings of the ethical thinking that sits behind them. In themselves, they are not the ethics of the design. The thinking behind the requirements is lost. Issues and uncertainties can arise when thinking about how to put research-related ethical principles and issues into practice. For example, what happens if there is disagreement among researchers about what “ought to” or “ought not to” be done in relation to a proposed research design, or about what the “right thing to do” is, such as whether a dating site’s profiles should be scraped or not? How do we decide, and based on what, whether our research design avoids “harm” or is “promoting good”? Grappling with such questions requires reflexive thinking on our part about ideas such as “ought,” “ought not,” “harm,” “promoting good,” and “the right thing” themselves. Why do we think the way we do about each of them? How does this affect the way we think about, design, and actually do our research, and what, and why, we say and report about that research? Thinking about ethics in this way opens up to scrutiny all of the decisions and actions that we take when both designing and implementing our research. Such thinking extends the focus of our thinking through ethics beyond an individual study or research design, or meeting the requirements of a specific ethics committee. It can “push scholars to question the ethical stakes of what is not studied, the questions that are not asked, and the social groups and communities that are not the subject of research” (Blee & Currier, 2011, p. 404). As Macfarlane (2010) points out, “[R]esearch ethics is rarely about headline-grabbing incidents of scandal and drama. There is an ‘ordinariness’ about the day-to-day decisions we face which is rarely recognized” (p. 24).
Examples of such ordinariness include whether to exaggerate the significance of our research in order to attract funding (Chubb & Watermeyer, 2017).14 Or it might be about being tempted to cut the odd corner, perhaps about the extent of our data collection or the detail we provide in the write-up of our research, or adapting the way we report our research to meet the requirements of a journal. Or it might be any or all of keeping the audio recording going for a few minutes after completing a formal interview in the hope that the interviewee might say something more interesting; promising to send someone their interview transcript to check and never doing so in the (almost) sure knowledge that there will be no consequences; referencing to sources that we may have found in the bibliographies of others but never actually read ourselves; or taking more authorial credit than we should do when working with other, perhaps less powerful or experienced, researchers. (Macfarlane, 2010, p. 24) This ordinariness can also include seemingly benign practices such as cultivating trust, or even friendship, with those participating in our research. What is the motivation for cultivating such trust or friendships? Could it be to get people to participate in our research and give us the information we want from them? We have raised many issues in this chapter that you will need to think about when developing your research design, when putting that design into practice, and after you have completed your research. Therefore, this chapter is not the last word on ethics in relation to research design. Rather it is a first word in an ongoing discussion that aims to provide a platform for the thinking that will need to be done about ethics at all stages of the
development of your research design. It provides the foundations for you to develop your own “inner ethical poise” (van den Hoonaard & van den Hoonaard, 2013, p. 13). In this way, the chapter has set the scene for discussions in later chapters of the book about putting ethical principles into practice in the various areas that make up what we call a research design, such as choices about research questions, methodology and methods, and what we write, and how we talk, about our research.

SUMMARY OF KEY POINTS

• Research ethics are concerned with moral behavior in research contexts.
• Ethics, and thinking about ethics, is a process that impacts on all aspects of designing research.
• Ethical matters related to research design are multifaceted and dynamic.
• Three core principles of research ethics are the concepts of informed consent, confidentiality, and anonymity.
• Repurposing, reuse, and sharing of research data pose particular issues about informed consent and who has the right to share, or use shared, data.
• Emerging forms of digital data create new and different ethical challenges when designing research.
• Ethics committees have an important contribution to make in advancing conversations and understandings of ethical research.
• Getting ethics approval from ethics committees is not the same as designing and conducting ethical research.
• Reflexivity is central to developing a researcher’s “inner ethical poise” (van den Hoonaard & van den Hoonaard, 2013, p. 13).

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

anonymity
confidentiality
ethics committees
informed consent
Institutional Review Boards (IRBs)
layers of consent
repurposed data
pseudonym
qualitative research approaches
repurposing of data
research ethics
vulnerable populations

SUPPLEMENTAL ACTIVITIES

1.
Doing your homework and being prepared for interacting with, and learning from, Ethics Committees

Find out which IRB or ethics committee(s) you may need to receive formal ethics approval from before you can begin your research. When doing so, take into account that this may vary depending on from where, and in what capacity, you will submit your research proposal. For example, will it be as a student in a
44  Research Design higher education institution, or will it be as an employee of an organization or both? Remember, if your proposed study involves multiple sites, you may be required to obtain approval from each site. Next, obtain a copy of the guidelines and requirements of those ethics committees. Think about what kind of information they request from you, and what design decisions will you need to have made to address those requests. • Why do you think that this information is requested? • Do the forms all request the same information? • If there are differences between them in terms of the information they request, what are they and why might this be? If you are in the process of developing your research design, begin drafting your application for the relevant ethics committee(s) as you develop your proposal. 2. Thinking about if the means justify the ends when we design our research Thinking about whether the means justify the ends is a central ethical consideration when designing our research. We will explore this point by looking at the example of researchers exaggerating the “impact” of their proposed research when writing an application for research funding in order to improve their chances of attracting funding. Chubb and Watermeyer (2017) studied how researchers wrote the section of their application for funding that required them to identify and demonstrate the “impact” of their proposed research by stating “how they will ensure economic and/or societal returns from their research” (p. 3). They interviewed researchers about what they did and why when writing this section about impact. Answers they received included • “. . . telling a good story as to how this might fit into the bigger picture. That’s • • • • what I’m talking about. It might require a bit of imagination, it’s not telling lies. It’s just maybe being imaginative.” (p. 8) “If I want to do basic science I have to tell you lies.” (p. 
5)
• “It’s virtually impossible to write one of these grants and be fully frank and honest in what it is you’re writing about.” (p. 5)
• “I don’t think we can be too worried about it. It’s survival. . . . People write fiction all the time, it’s just a bit worse.” (p. 6)
• “People might, well not lie but I think they’d push the boundaries a bit and maybe exaggerate!” (p. 9)

Here we see academics justifying what is at best exaggerating, and at worst lying, when writing this impact statement. Justifications included that they were just being imaginative or creative, or that this is just part of what one has to do to get a grant. This raises a dilemma for researchers when funding is needed for the research to be able to proceed. If the funding is not gained, then the research cannot proceed. Therefore, how far are researchers prepared to go to get that funding? Does gaining the funding, and therefore being able to actually do the research, outweigh exaggerating the impact of the research? This is an ethical matter and consideration.
Think about and discuss the following dilemma that arises from this situation:

• As a researcher, do you write the impact statement (or any other part of the proposal for that matter) “to fit the requirements of funders . . . on the surface a seemingly commonsensical or ordinary position to adopt if winning the funding is the goal” (Cheek, 2017, p. 31)? After all, this is something everyone else is doing anyway.
• Or do you choose not to write your proposal in line with those requirements and therefore effectively make yourself and your research uncompetitive in the funding competition? Your project, which may have significant benefits for the people participating in it, will therefore not proceed.
• Is there a middle course of action in all this?

FURTHER READINGS

Cheek, J. (2010). Human rights, social justice, and qualitative research: Questions and hesitations about what we say about what we do. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry and human rights (pp. 100–111). Left Coast Press.

Lahman, M. K. E. (2018). Ethics in social science research: Becoming culturally responsive. SAGE.

Wiles, R. (2013). What are qualitative research ethics? Bloomsbury Academic.

NOTES

1. See Chapter 1 for an extended discussion of reflexivity.

2. For example, when participants are consenting to be part of a research project testing the effects of a certain drug, one group of participants will be given the drug being tested and the other group a placebo. Informed consent requires us to tell participants that the design of this research involves some of the participants in the study being randomly allocated to the group being given the placebo and not the active drug. This might be them. In this way, potential participants are not deceived into thinking that by participating in this study there will necessarily be the chance of any benefit to their individual health.
This is especially important if the participants are in poor health and looking to participate as a chance of improving their health status. Further, it is equally important to make clear that even if they are randomly allocated to the active group, and receive the drug, the drug may have no, or even adverse, effects.

3. See Lahman (2018, pp. 79–81) for a good discussion of this.

4. See Chapter 5 for more about flexible and emergent study designs associated with qualitative research approaches.

5. The Declaration of Helsinki is “a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data” (World Medical Association, 2018, item 1).

6. In Chapters 6, 7, 8, and 9 we take a closer look at what observations are and can be used for in your research design.

7. Defining exactly what Big Data is, is not easy, as there is much confusion around the use of this term. Most often Big Data is associated with the combination of many digital traces. It is this combination of digital data traces which is “big”—in terms of the amount and diversity of sources of those digital traces.
46  Research Design 8. “The internet of things, or IoT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-tohuman or human-to-computer interaction” (IoT Agenda 2016). 9. See for example Tubbs (2018, paragraphs 6–7) for a discussion on big data and marketing: Publishers are gaining more data on their visitors and this allows them to provide more relevant advertising. Google and Facebook are already doing it with their amazing targeting options but third party vendors will soon have the same array of choices. You could target people based on their recent searches, articles they read, lookalike audience and so on. There is no limit to the impact big data is making in the marketing world. 10. “Web scraping, also known as web data extraction, is the process of retrieving or ‘scraping’ data from a website. Unlike the mundane, mind-numbing process of manually extracting data, web scraping uses intelligent automation to retrieve hundreds, millions, or even billions of data points from the internet’s seemingly endless frontier.” Furthermore, “A web scraper is a specialized tool designed to accurately and quickly extract data from a web page” (both quotes from scrapinghub, 2020). 11. Digitization is the process of converting information from a physical, analog format (for example, paper based medical records, or scientific articles in printed editions of a journal) into a digital format (Gartner IT, 2018). 12. See the discussion of this in relation to the 2016 U.S. presidential election and the use of big data (specifically, Facebook profiles) by the company Cambridge Analytica to individualize and better target political advertisements (Cadwalladr & Graham-Harrison, 2018; Detrow, 2018 ). 13. In 1932, the U.S. 
Public Health Service, working with the Tuskegee Institute, began a study to record the natural history of syphilis in hopes of justifying treatment programs for syphilis. Initially, 600 African American men were put in the study—399 with syphilis and 201 who did not have the disease. They were not aware that they were in the study or that the study was about tracking the progression of untreated syphilis. The men with syphilis did not receive the treatment needed to cure their illness. Instead, they were told that they were being treated for “bad blood” and received free medical exams, meals, and burial insurance. The untreated men died, went blind, and developed mental illness and other severe health conditions arising from having syphilis. The study was originally intended to last 6 months but actually continued for some 40 years (https://www.cdc.gov/tuskegee/timeline.htm).

14. This example is explored further in Supplemental Activity 2 at the end of this chapter.
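The kind of automated extraction described in note 10 can be sketched in a few lines of code. The following Python fragment is a minimal illustration only: the HTML snippet, the field names, and the ProfileScraper class are all invented for this example, and a real scraper would download pages over a network rather than parse a hard-coded string. The point is simply to show how easily profile-style information can be harvested at scale, which is precisely what raises the consent questions discussed in this chapter.

```python
# Minimal sketch of "web scraping" as described in note 10: automated
# extraction of structured data from a web page. The HTML below is a
# made-up stand-in for a downloaded profile page; the class and field
# names are hypothetical, not taken from any real site.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<div class="profile">
  <span class="field" data-name="username">user123</span>
  <span class="field" data-name="age">34</span>
  <span class="field" data-name="location">Copenhagen</span>
</div>
"""

class ProfileScraper(HTMLParser):
    """Collects the text of every <span class="field">, keyed by its data-name."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None  # data-name of the span we are inside, if any

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "field":
            self._current = attrs.get("data-name")

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()

    def handle_endtag(self, tag):
        if tag == "span":
            self._current = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.fields)  # {'username': 'user123', 'age': '34', 'location': 'Copenhagen'}
```

Run against thousands of pages instead of one string, a loop like this yields exactly the kind of aggregated dataset at issue in the Kirkegaard and Bjerrekær case, without any of the people behind the profiles ever being asked for consent.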
3

DEVELOPING YOUR RESEARCH QUESTIONS1

PURPOSES AND GOALS OF THE CHAPTER

This chapter focuses on the iterative process of developing the research questions that underpin your research design. We emphasize the importance of constantly focusing, and refocusing, your thinking about what your research problem actually is. Only when you have worked that out can you begin to think about the specific aspects of that problem that you will focus on in your study. These specific aspects will provide the basis for your research questions. We examine how the type of contribution to existing knowledge that you want your research to make affects what form those research questions take. The role of hypotheses, and the question of whether all research questions need to have hypotheses associated with them, is discussed. We highlight the importance of considering the feasibility of your proposed research questions given the resources you have available to do the study. There is little point in developing an excellent research question, the answer to which requires more resources than you have available to you. To demonstrate the type of thinking that sits in, and behind, the development of research problems and questions, we include an invited contribution written by Maxi Miciak and Christine Daum. This is an excellent reflexive account of how these researchers used iterative cycles of thinking to think big, plan big, but ultimately work out what it was that they really wanted to focus their research on, as well as what was feasible in terms of the resources available to them. The goals of the chapter are to

• Establish what a research question is and the part that it plays in the research design process.
• Explore different forms that research questions take, and why they do so.
• Explore the development of research questions as an iterative reflexive process central to research design.
• Provide practical examples of such research question development.
• Explain how relevant research literature can be used when developing research questions.
• Discuss the importance of thinking about the feasibility of a research question when designing research.
48  Research Design
• Consider different forms that research questions can take.
• Highlight that the form that a research question takes reflects the type of knowledge needed to address the purpose of the research.
• Introduce the idea of inductive and deductive reasoning and how that impacts the way research questions are developed.
• Establish that hypotheses are related to specific forms of research questions.

BRINGING RESEARCH QUESTIONS INTO FOCUS

All research is designed to address some sort of problem. No matter what type of problem your research focuses on, the overall goal of designing your research is to make an original contribution of some sort to the existing body of knowledge, or ways of thinking, about that problem. The initial idea for what your research problem might be about can come from a number of places. These include "observing and asking questions about your everyday activities . . . social and political issues . . . the literature on a topic, or from theory. These areas of course intersect" (Merriam & Tisdell, 2016, p. 76). There is "something" related to those everyday activities, social or political issues, literature, or theory that you have observed or read about that has caught your attention and that you want to know more about. This is the area, or broad topic, that your research will focus on. For example, the area, or broad initial topic, that your research will focus on might be unemployment and its effects on people, or it might be recruitment and retention of employees in organizations. You will notice that these topics are very wide and therefore will need to be focused more before it will be possible to decide on what you will research about them and how. This means that having decided on an area of interest such as unemployment, you will then need to think through what you want to know about the area and why. 
This is because although you have identified the area of interest that will be the focus of your research (unemployment), you do not yet have a research, or "researchable," problem related to that area. There is a difference between a researchable problem and problems that affect your daily life. As Silverman (2001) notes, while social issues "like unemployment, homelessness and racism, are important; by themselves they cannot provide a researchable topic" (p. 5). Of course, this does not mean that a research problem cannot be related to a social issue such as unemployment. What it does mean, however, is that the social issue of unemployment is not in itself a researchable problem. Rather, the social issue of unemployment is a broad area of focus in which there are many possible research problems, each of which has many possible research questions attached. To decide which research problem and which research questions your research design will be developed to address, you will need to think through what aspect(s) of that social issue you want to know about, as well as why you want to know about that particular aspect or aspects. For example, a particular aspect about unemployment that you might want to know about is whether or not there is a link between where a person lives and whether they are unemployed. In this case, you might want to know how many people are unemployed in different socioeconomic areas of a city. Or is the aspect that you want to know about how long people remain unemployed after losing their jobs? Or why migrants are overrepresented in unemployment figures?
Chapter 3 • Developing Your Research Questions   49 Or people’s perspectives about, and experiences of, being unemployed? Once you have decided which aspect(s) of the topic you want to know more about, you can then think and make decisions about what type of information you will need to enable you to address the aspect of the topic you want to know more about. It is only at this point that you are ready to begin to think about the type of methods you will need to use in your research design to enable you to obtain that information. In the following box, we model the type of thinking needed to put this process of focusing your research topic on aspects of that topic and then developing research questions related to those aspects into practice. PUTTING IT INTO PRACTICE EXAMPLES OF THINKING THROUGH WHAT ASPECT(S) OF THE TOPIC THAT YOU WANT TO KNOW ABOUT, AS WELL AS WHY YOU WANT TO KNOW ABOUT THAT ASPECT/THOSE ASPECTS You have decided that you are interested in the area of long-term unemployment. For example, your interactions with people who are long-term unemployed may have highlighted that many people who cannot find employment are in fact both capable of working and want to work. Consequently, you decide that you want to know more about why people who want to work are not able to get jobs. In particular, you are interested in how and why this group of people is excluded from the employment marketplace. This exclusion becomes the focus of your research problem. You then think more about what exactly it is that you want to know about related to the problem of this exclusion. For example, is it that in a competitive labor market, experience is required for many jobs, so in effect people need to have had a job before they can get a job? Or is it that people become increasingly disillusioned by their experiences of being rejected when applying for jobs they are capable of doing? 
Once you have decided what it is that you want to know more about related to the problem of exclusion, then you can develop specific research questions designed to enable you to obtain a better understanding of what it is that you want to know about. Alternatively, you may be interested in the area of long-term unemployment because you have read studies that show that long-term unemployment causes serious health issues for both the unemployed person and their families. Therefore, you identify that what you want to know more about is health issues related to long-term unemployment. You will then need to think more about what exactly it is that you want to know more about related to these issues. Do you want to explore one or more specific issues that other studies have already identified such as depression or alcohol abuse, for example? Or do you want to better understand what long-term unemployed people in a particular situation of interest to you identify as health issues arising from that situation? Do you want to focus on the person who is unemployed? Their partners? Or children in families where all adults are long-term unemployed? Or all of the above? Once you have decided this, you will go on to develop specific research questions related to the specific research problem or area of interest you have identified related to your broad topic of interest. A research question is a question that a specific research study sets out to answer. The answer to a research question will provide a better understanding of an aspect of the research problem to which that research question is attached.
In addition to deciding which aspect(s) of your area or broad topic of interest (the social issue of unemployment) you will study, and what type of knowledge you want your research to contribute to understanding that area or broad topic, you need to ask yourself how your research will relate to what is already known about the area of interest. For example, will it build on what is known, will it replicate studies done in one context in another, will it pick up on recommendations for further research in the area, will it challenge existing knowledge in some way, or will it contribute new ways of looking at an aspect of unemployment that has been overlooked? All of these are possible contributions to what is known, but it likely will not be possible to address all of them in a single research study, especially if you are a single researcher. What the discussion so far in this section shows is that thinking through and about your area of interest (e.g., the social issue of unemployment) enables you to identify what aspects of that area you want to know more about and why. Those aspects (e.g., unemployment and people capable of and willing to work not getting jobs, or long-term unemployment and its effects on those affected and their families) give rise to your research problem. Once you have worked out your research problem, then you can begin to think about what specific aspects of that problem you will focus on in your study. Those aspects will be the basis for the questions that your research is designed to address. Consequently, to bring your research question(s) into focus, you narrow the scope of your thinking by moving from a broad area, to a problem related to that area, and finally to research questions related to that problem. The terms research area, research problem, and research question differ in terms of their level and scope of focus related to what will be studied. 
Although not all researchers use such a clear-cut distinction between these terms (for example, you will sometimes find research problem and research question used interchangeably), we think it is useful to distinguish between them in terms of the level and scope of focus that each relates to. Therefore, in this chapter we use
• Research area to refer to the broad initial topic(s) or wide area(s) of interest giving rise to the research.
• Research problem to refer to the part of that topic or area that we are interested in and will focus our research on. It is a problem because there is something about this issue or topic that we need to know more about.
• Research question to refer to the specific aspects of that part of the topic or area that we are interested in. A research question is posed in such a way that it can be researched, and when addressed, provides information able to contribute to the body of knowledge relevant to the research problem and thereby provides better understanding of the research problem that the question is related to.

TIP: THE NEED FOR CLARITY
When designing your research, you will need to make sure that you establish, and are consistent in the way that you are using, terms related to that design such as research area, research problem, and research questions.
Feasibility Considerations

Thinking about the specific research questions that will form the focus of your study will also require you to think about whether you have the resources to enable you to address these questions. Do you, for example, have sufficient time, expertise, and access to information or equipment or participants to be able to complete this research well? It is pointless coming up with what you consider a "perfect" research problem and set of associated research questions if addressing those questions is not feasible. For example, your research design is not feasible if you need 2,000 people in the study cohort to get enough statistical power in your study, but you have no funding or way to recruit so many people. The feasibility of a particular research design, that is, whether the research can be done within the time and resources that are available, must be taken into account from the outset of the research design process—including the development of the research questions. Not taking this into account may mean that the research is limited by how much of the information needed to address those questions you actually are able to obtain. Such a limitation will mean that the significance2 of the research is reduced, or at worst, that the entire research project must be abandoned. Thinking about, and being honest about, the resources that you have available and whether they are sufficient for what you are proposing to do is an important part of designing your research. Remember, if you are a student, then the resources that you usually will have are yourself and your time, the guidance from your advisor, and perhaps a small amount of funding. You need to take this into account when considering the scope of your research problem and questions. This may mean reducing the scope of your research problem and questions in some way. 
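The 2,000-participant feasibility example above can be made concrete with a rough back-of-the-envelope power calculation. The sketch below is a minimal illustration, not part of the chapter: it uses the standard normal-approximation formula for the sample size needed in a two-group comparison of means, and the function name `n_per_group` and the effect sizes (standardized effects of 0.2 and 0.1) are our own illustrative assumptions.

```python
# Rough sample-size estimate for a two-sided, two-sample comparison
# of means, using the normal approximation. Illustrative only.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a given
    standardized effect size with the chosen alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "small" standardized effect (0.2) already needs hundreds of people
# per group; halving the effect size quadruples the requirement.
print(n_per_group(0.2))  # 393 per group, i.e., ~786 participants in total
print(n_per_group(0.1))  # 1570 per group, i.e., ~3140 participants in total
```

Halving the detectable effect size quadruples the required sample, which is exactly the kind of arithmetic that can make an otherwise well-formed research question infeasible within a student's time and budget.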
Consequently, Stake (2010) advises us to “[t]hink big, plan big, but do a small, well-contained study” (p. 78). How might you go about putting Stake’s advice into practice? We explore this question using the example of the thinking that was required to bring a student’s research problem and associated research questions into focus. PUTTING THE IDEA OF “THINK BIG, PLAN BIG, BUT DO A SMALL, WELL-CONTAINED STUDY”3 INTO PRACTICE A post-graduate student was interested in the broad area of the impact of higher education reforms on those working in the higher education sector. Her specific area of interest was the effect of the 2015 higher education reforms in Norway on employees in the Norwegian higher education sector.4 Here there is no doubt that the student has identified an area of interest and a draft research problem related to that area. This is the higher education area and reforms in that area and specifically those in Norway in 2015 and the effect that those reforms have had on employees in the higher education sector. There is also no doubt that some focusing of the research problem has already happened: the ands reflect that focusing. However, as the proposed research problem stands now, the student would not be able to proceed with the actual research. There is much more thinking work to do and decisions to be made before sufficiently focused and feasible research questions about that problem can emerge. A place for her to begin this focusing work is to ask herself what the key words are in the statement of her research problem. For example, she identifies one of the key words as effect. What has been the effect of the reforms? Thinking about this word it becomes
apparent to her that more thinking is needed about what this word actually means. What aspect of effect is she interested in, and why? Is she interested in a specific effect such as economic effect? Effect on morale? Effect on behavior? Effect on outcomes, and if so, what sort of outcomes? Or finding out what the people in the study say the effects are? Or are we talking about all of the above? Asking these questions of the draft research problem indicates that there needs to be more thinking about what exactly it is that she wants to know about in terms of "effect." The thinking needed to unpack the word effect helps bring into focus what it is that she actually wants to know something about at the end of her study. This type of systematic and logical questioning will help focus and develop the research problem. In addition, it will help to make sure that the research is feasible—that is, that it can be done within the time and resources that are available. When producing the first version of the research problem, namely How did the 2015 higher education reforms in Norway affect employees in the Norwegian higher education sector in practice, the student had "thought big" and "planned big." However, to study the effects of the entire reform on all employees in higher education, in the whole of Norway, is not feasible. By continuing to ask herself questions about all the key words in this draft research problem, she was able to develop a feasible and significant research problem, ask feasible and significant research questions about that problem, and do a small, well-defined study. In the following box, we capture this reflexive process. 
PUTTING IT INTO PRACTICE
HOW IDENTIFYING KEY WORDS HELPED A STUDENT FOCUS AN INITIAL PROPOSED RESEARCH PROBLEM INTO A SMALL, WELL-DEFINED STUDY

A student initially identifies her research problem to be How did the 2015 higher education reforms in Norway affect employees in the Norwegian higher education sector in practice? She identifies key words in that initial research problem statement in order to focus it further. She identifies the key words to be effects, employees, and the 2015 higher education reforms in Norway. She then asks herself questions about those key words:
• What effect(s) is she interested in and why? For example, economic effects? Effects on morale? On behavior? On performance outcomes? Or all of the above?
• Which group of employees is she interested in knowing more about? For example, administrators, managers, academics, or all employees?
• Which part of the reforms is she interested in knowing more about? One specific aspect of the reforms, or if more than one aspect, which ones?
When these decisions about the focus and scope related to each key term of the initial research problem statement have been made, she continues to ask questions of those decisions:
• Why has she chosen those specific effects, employees, and aspects to study?
• What is it that she wants this research to contribute to the body of knowledge about this problem? In other words, what is the significance of her research?
These decisions give rise to her research questions. Such iterative thinking through and about the topic of interest enables the student to develop a feasible and significant problem and associated research questions. It enabled her to capture a small part of a very large area and do a small, well-contained study.

USING THE LITERATURE WHEN DEVELOPING RESEARCH QUESTIONS

The process of iteratively thinking with, and about, the research problem, and the questions associated with it, that you wish your research to focus on, does not occur in isolation. Throughout this process of thinking, you will need to be aware of, and interact with, what is already known about the problem and how it is understood. In other words, you will need to join conversations in the body of knowledge that has been built up by the work of others in this area. Richards likens entering these ongoing conversations to "soaring over the landscape" (Richards, 2015, p. 20) in order to get an overview of what is "out there," or already known about, the area or issue that has caught your attention and that you want your research to contribute to. Once you have got a sense of this landscape, you can make decisions about what parts of that landscape warrant a closer look and might form the basis for the problem that your research will be designed to address. Richards (2015) describes this process as "locating something small that can be captured" (p. 20) in that landscape. This locating of "something small" is the beginning of bringing your research problem into focus. Of course, as we have pointed out earlier, there is still much thinking to do about what aspects of this "something small" you are particularly interested in knowing more about and why. These aspects are likely to become the aims and research questions for your study, providing guidance and focus for your study. 
TIP: SMALLER IN SCOPE AND FOCUS IS NOT NECESSARILY A BAD THING WHEN DESIGNING RESEARCH
You may be worrying that focusing your research on something small will not result in a worthwhile, or "as good as," contribution to the existing body of knowledge about your problem, as researching something big will. This is because you think that your research will be less significant. The flaw in this type of thinking is that "something big" and "something small" do not refer to the significance of the research. In fact, smaller in scope and focus actually means that the research has a much better chance of being completed, and completed well, thereby making the findings of the smaller study more significant than those of a larger-scope study that cannot be completed well.

What Is Missing in the Existing Body of Knowledge in the Literature Related to Your Problem?

One way to locate "something small that can be captured" (Richards, 2015, p. 20) is by exposing gaps in the existing body of knowledge about the particular area we are interested in researching. These gaps may then form the basis for our research problem and associated research questions. In such a scenario "[w]e read the literature, identify what is missing or mis-stated or even under-emphasized, and then formulate a question that links
these 'gaps' to our research interest" (Swaminathan & Mulvihill, 2017, p. 14). This is a strategy used by many researchers when developing their research questions. When developing research questions, building on or challenging existing knowledge in an area is important. As Wentzel (2018, pp. 29–30) reminds us, "there will always be areas where our understanding is no longer adequate, and the existing literature insufficient. . . . research always plays catch-up to the existing reality, and thus, new contributions are always needed." He points out that research questions can arise when previous research is inadequate or insufficient because
• Something has never been researched before, theoretically or empirically
• Something has been researched before, but the body of knowledge is incomplete
• Disagreement exists between scholars in the field of interest about what is going on
• There is inconsistency in the findings in the area

Caution: Mind the Gap

Whether or not filling a gap is a useful and worthwhile strategy to undertake is related to the significance of the research. Will addressing a particular gap you have identified make a substantial contribution to the existing body of knowledge? Or could it simply be considered addressing a gap for the sake of addressing a gap? This type of reflexive questioning will force you to think about why a particular gap is something that needs to be known about and how that knowledge will be used. In other words, it forces you to think about the significance of addressing that gap. Further, there is a danger that only developing research problems and questions by addressing gaps in the knowledge that we already have about an area can result in a somewhat static, or bounded, view of what knowledge in that area is, or can be. This is because the purpose of the research becomes filling those gaps in that existing knowledge—not necessarily challenging or expanding the wider body of knowledge itself. 
Limiting possible problems and questions to such gaps can presuppose that the total body of existing knowledge is sound, true, or a given (Wentzel, 2018). However, is that necessarily the case? Is everything in the body of knowledge related to the research area you are interested in "sound," "true," or a "given"? Asking this question can result in a different way of thinking about connecting your research problem and questions to the existing body of knowledge. For example, is this the only way that that area can be understood? Are there other ways of thinking with theory about the area? Can the area be studied in different ways methodologically?

Beyond the Gap

Rethinking the existing literature in this way opens up the possibility for different research problems and questions about your area of interest, which in turn prompts very different research questions than those focused on filling gaps in existing and dominant ways of thinking about that area (Alvesson & Sandberg, 2011, 2013). Consequently, the contribution of your research to the existing body of knowledge in the area might be to challenge, alter, or supplement that existing knowledge and thereby introduce new theoretical or empirical understandings related to that area of interest. Uncovering the often unspoken assumptions in existing bodies of knowledge about a problem or concept related to that problem involves thinking about and identifying in
what way that problem or concept has been studied previously, and why researchers have chosen to study the problem in that way and not another. The way other researchers have investigated the problem you are interested in has been shaped by their assumptions about what that problem is, what can be known about that problem, and how we can gain or develop such knowledge. These assumptions have imposed limitations on what kind of knowledge can emerge from their research. Perhaps other ways of investigating a subject matter could lead to a kind of knowledge that is currently missing in the area? Could it be that this missing knowledge will make an important contribution to the way that the problem is thought about and strategies are developed to address that problem?

Activity: Putting It All Together in a Sentence
If you are in the process of designing research, try to tell a fellow student or a family member what it is that you are interested in learning more about by conducting the research using this sentence structure: I am studying [fill in your research area or broad topic of interest] to investigate [fill in your research questions] in order to [fill in what it is that you want the research to contribute to the body of knowledge about the research problem, i.e., what the significance of your research is]. If you find this hard to do, then you know there is more thinking to do before you have developed your research questions sufficiently to make informed decisions about other aspects of your research design. These are aspects such as which method(s) to use to answer those research questions and why, and who you will collect data from (or about) in your research study and why.

DIFFERENT FORMS OF REASONING AND HOW THEY SHAPE THE FORM THAT RESEARCH QUESTIONS TAKE

Research questions can take different forms. 
The form that a research question takes is related to the goal or purposes of the research. The goal of some research is to test existing theories or ideas, such as "the older a long-term unemployed person is, the harder it is for that person to get back into the workforce." To test this theory will require us to employ a different type of reasoning than if our goal is to understand what the experience of older long-term unemployed people is when they interact with the labor market. In the first instance, we will need to employ deductive reasoning, whereas in the second instance we will predominantly employ a type of inductive reasoning.

TIP: REMEMBER
Different types of reasoning, and therefore different types of research designs, and the different forms of research questions that underpin those designs, reflect different goals for doing the research.
Deductive Reasoning

In research based on deductive reasoning, empirical evidence is collected and used to test an existing theory or a theoretically based assertion. The researcher begins by thinking about that theory, or aspects of that theory, and predicts what the empirical evidence, that is, the data collected, should show if that theory, or those aspects, are correct. The theory being tested guides the researcher's thinking about what kinds of empirical evidence are relevant to look for. Thus, the research is deductive as it is designed to test empirically those predictions and therefore that theory. The goal of both the research, and the theory that it is designed to test in some way, is to generalize, using statistically based procedures, about "all instances or cases of a particular type" (Maxwell & Mittapalli, 2008, p. 877). For example, if you are studying effects of long-term unemployment, you might want to test the theory or empirical findings you have come across in the literature that suggest that the longer a person is unemployed, the harder it is for them to reenter the workforce. In particular, you might have decided that what you are specifically interested in is whether that theory or finding applies or is correct across all age groups and all occupations. The form that your research questions take will reflect that decision. For example, if you want to know whether it is harder for long-term unemployed people in some age groups compared to other age groups to get back into the workforce, then you will develop a hypothesis related to your research question that will enable you to answer that question. A hypothesis is a testable statement about what we predict empirical evidence to reveal about a problem of interest in a specific situation in a specific context (and in the social sciences often for a specific group of people). 
For example, "It is harder for long-term unemployed people 50 years and over to get back into the workforce than it is for people aged 49 years or less." This hypothesis focuses on the age of long-term unemployed people and their ability to reenter the workforce, and tests if there is a relationship between them. "Age of long-term unemployed" and "ability to reenter the workforce" are examples of what are known as variables. A variable refers to whatever it is that your research is designed to measure. For example, a variable may be a characteristic of an individual such as age, weight, or height. Or, a variable might be level of achievement, attitude toward vaccine programs, or amount of exercise per week. In other words, a variable is something of interest that varies in relation to either the people or the places that you are studying. For example, if the variable of interest is "age," the value for a specific individual in the study in relation to that variable could be "34." For a different individual, the value of the variable "age" can, for example, be "50." On the other hand, if the variable of interest in the study is "area of work," this variable could have values such as "administration and business," "health and well-being," "technology," "education," and "hotel/tourism industry." That a hypothesis is "testable" means that there must be statistical procedures available to analyze the empirical evidence a researcher obtains in order to demonstrate support, or lack of support, for the statement made in the hypothesis about those variables. For example, does the evidence support that being 50 or over and long-term unemployed makes it harder to reenter the workforce? Since the focus is on establishing the relation between those specific variables of interest, answers will be in the form of whether or not there is a relation (determined statistically) between those variables
and if there is such a relation, what it is, and how significant it is (in terms of statistical significance). Thus, "[d]eveloping hypotheses involves . . . moving from the theory back to the sort of evidence supporting it" (Lewins, 1992, p. 48). This way of thinking and the reasoning that underpins it is deductive.5

Inductive Reasoning

In research based on inductive reasoning, the goal of the research is to build theoretical and empirical concepts and understandings. Thus, inquiry based on inductive reasoning does not set out with a predetermined or a priori set of theoretical concepts to test. Instead, the researcher poses questions such as, What is going on here? What do people understand this to mean? How did it come to be perceived or understood in that way? Why that way and not another? Such "questions address the content of meaning, as articulated through social interaction and as mediated by culture" (Gubrium & Holstein, 1997, p. 14). For example, if you are studying effects of long-term unemployment, you might want to find out more about what it is like to be 50 or above, long-term unemployed, and finding it difficult to get a job. To do so you might ask this group of people who are long-term unemployed about their experience of trying to find employment. What is their perception of what happens when they apply for jobs? In their view why does this happen? What is the effect on them? Once again, the form that your research questions take should reflect the decision that you have made about what you want to find out about. Therefore, it is likely that you will use some form of open-ended research questions that enable the emergence of rich and qualitative descriptions and interpretations of the perceptions or experiences of people about specific aspect(s) of the everyday context(s) in which they exist. 
Such an approach does not focus on specific variables identified in advance by the researcher as being the only ones of interest. Instead, the form that the research question(s) takes is designed to enable deeper and richer understandings to emerge about something that is happening in a particular social context. Open research questions are designed to build understandings of concepts and theories related to the focus of the research. Therefore, research questions need to be formed so that the answers to them provide the type of knowledge needed to build those concepts and theories. This type of research question enables answers in the form of “thick interpretation” (Denzin, 2001, p. 52)6 of people's in-depth descriptions or constructions of their understandings or experiences, in their everyday social context(s), of whatever the research is focused on. Thinking with and through this thick interpretation can develop existing theoretical concepts and understandings related to the area and problem that the research is focused on, or open up possibilities for different ones. This way of thinking and the reasoning that underpins it is inductive. Therefore, such research questions will not be hypothesis driven. This is because the aim of the research is to “build, rather than to test concepts, hypotheses, and theories” (Merriam & Tisdell, 2016, p. 84).

We will return to take a closer look at, and develop further, many of the points raised in this section in parts of Chapters 4, 5, 6, 7, 8, 9, and 10. There is still a lot more thinking to do about these matters and connections to make. The ideas that we have introduced here provide you with a good foundation for that discussion. For example, in Chapters 8 and 9, we include a discussion of how hypotheses can be developed from research questions and
58  Research Design

then tested. In Chapters 6 and 7, we discuss how to develop open-ended questions and analyze the information you receive from them.

TIP TO SUM UP

The form that a research question takes arises from the goal of the research. Does the researcher aim to test existing theory, or concepts derived from it, that are related to the research problem that the researcher is interested in knowing more about (deductive)? Or is the researcher more interested in trying to build theoretical concepts and understandings about the area (inductive)?

PUTTING ITERATIVE AND REFLEXIVE RESEARCH QUESTION DEVELOPMENT INTO PRACTICE—LEARNING FROM OTHERS

At this point, you may feel a little overwhelmed at the prospect of developing your research problem and questions. How can you actually put the principles we have discussed into practice? This type of discussion often remains hidden in textbooks and in descriptions of the final form research designs take. Therefore, we miss information about what others have learnt when developing their research problem and questions. How did they transform their thinking about their initial research problem into research questions that guided their study? Such information can provide us with pathways to follow, dead ends to avoid, and experiences to learn from when we are designing our research and the questions that will underpin that research. Yet we hardly ever get the full story of what happened, from which we could learn. Therefore, we decided to ask two researchers to share this information with us. At the time of writing this book we knew that researchers Maxi Miciak and Christine Daum had relatively recently completed excellent PhD studies that their principal advisor spoke very highly of.
Therefore, we decided to ask them if they would be willing to tell their respective stories about how they undertook the process of developing and focusing their research questions and what really happened along the way. We made two key requests of Maxi and Christine. The first was that it must be the real story, not the sanitized version in which everything seems to fall into place effortlessly and the questions just seem, almost miraculously, to have appeared. The second was that, if possible, they include a discussion of the various versions of their research questions and the reasons for the changes that they made to those questions along the way. We wanted them to expose the thinking that they did with, and about, those questions. Such thinking usually is not visible in our accounts of our research or our research design. They agreed to tell their stories. Each of them addressed our requests in their own way. The result is two unique analyses and overviews that enable us to get inside their thinking when they were developing their research questions. This reflexive writing is a treasure trove of tips and inspiration for how to navigate this difficult and at times frustrating process. The next section of the chapter, “Scratching the Underbelly of Research Design: Developing Clear Research Question(s),” is their account of putting Stake's (2010) advice to “[t]hink big, plan big, but, do a small, well-contained study” (p. 78) into practice.
SCRATCHING THE UNDERBELLY OF RESEARCH DESIGN: DEVELOPING CLEAR RESEARCH QUESTION(S)

Reflections by Maxi Miciak and Christine Daum7

A Bit About Us, Our Projects, and What Makes Us “Qualified” to Write This

Research always occurs within context, and the researcher is a part of that context. Given that we are writing about research, we feel the same rule applies. Therefore, before diving into speaking about research questions, we, Maxi Miciak and Christine Daum, thought it appropriate to introduce ourselves.

Maxi is a physical therapist who spent a good portion of her clinical career working with people managing chronic pain. After almost 14 years in clinical practice, she began doctoral studies in rehabilitation science to research a significantly understudied phenomenon in physiotherapy—the therapeutic relationship between the therapist and patient. She did a qualitative study using interpretive description as her methodological orientation8,9 because she wanted to provide clinicians with practical guidance for navigating relationships with patients. To do this, the foundational components of the therapeutic relationship specific to physiotherapy needed to be identified and described. Her final research question was What are the conceptual descriptions regarding the components of the therapeutic relationship in physiotherapy? Maxi is currently a postdoctoral fellow studying research impact assessment, that is, the assessment of the influence research has on policies and practices and, further downstream, on health outcomes and social and economic impacts. Her research focuses on operationalizing patient-centered care by developing, implementing, and evaluating care models that impact the patient–practitioner therapeutic relationship, including how health services and policies support this relationship.

Christine is an occupational therapist with an interest in aging.
She has spent most of her clinical career working with older adults in community and institutional settings. During this time, she observed the impact of environments, including neighborhoods, on older adults' function. Christine pursued a master of science in health promotion and a PhD in rehabilitation science to better understand the relationships between the environments in which older adults are situated and their performance of daily activities—those activities that people engage in to look after themselves, enjoy life, and contribute to society. The purpose of her research project was to explore the daily activities of older women living in inner-city (disadvantaged) neighborhoods. Her research question was How do the neighborhoods of inner-city-dwelling older women influence their daily activities? Like Maxi, she used qualitative methods and interpretive description to explore her research question. Christine is currently the coordinator of a transdisciplinary research program focused on creating, deploying, and evaluating technologies to support older adults' mental and cognitive health.

Drawing on our experiences, we understand that the simplicity and neatness of a well-crafted research question often mask a process that can be daunting, exhilarating, and, at times, downright depressing! We often felt unworthy going through this process; the term fraud was not far from our thoughts. However, we both feel qualified to write about the “underbelly” of developing research questions because we have lived, learned, and supported one another along the way. Also, we now mentor students and vicariously relive the experience with more wisdom, patience, and practical suggestions. It is important to note that we had different experiences coming to our questions, as we describe below, but we each eventually got to a question that set the stage for our
projects. And although different, we both experienced the process of exploring, generating, and landing on the research question as a necessary struggle deserving of more time and thought than is often acknowledged. We also recognize that even though we each landed on a single research question, you may have more than one question depending on the complexity of the phenomenon you are studying. Respecting this, we will use the term question(s) to represent the breadth of research possibilities.

In the Beginning There Was . . .

You have a brilliant idea that is important for advancing your field—everyone has told you so. You have been talking about it, maybe for years. You may even have had visions of future fame, jet-setting to conferences as the keynote and preeminent mind that was brilliant enough to see and meaningfully address this issue. Then, you are rudely awakened from your daydream by your committee asking, “What is your research question(s)?” You open your mouth. And then, the nightmare—words don't come out—they are not even formed in your brain, let alone making it to your throat and out of your mouth. You know it is there, but it's a jumbled mess that you can't seem to untangle. It is like grasping smoke—you see it, but it is not solid, and you can't hold onto it. And you are confused and even a bit ashamed that you assumed it was formed and right there to be plucked. You painfully stumble through articulations that don't quite fit, stabbing at air, as the committee members look at you, unblinking, waiting. If you are like us, after experiences such as the one described above, you will go through days of doubt, wondering whether you actually ever really knew the research issue and if it is, in fact, relevant. Because if you did know the issue and its importance, then surely the question would slide off your tongue smoothly and articulately. However, we want to share the hard lesson we learned: It does not work this way.
Much more thinking, and often frustrating work, is required to get the clarity you need.

Generating the Question(s)

Shocked into reality, you now must begin the process (and we repeat, process) of generating and regenerating your research question(s). And as you might have guessed, there is no one way to begin or complete a process influenced by factors such as previous research experience, existing knowledge, and even your supervisor's methodological leanings. The question(s) exist in a context. For instance, Maxi needed to consider the current state of the physiotherapy profession, including practice models within physiotherapy and across different health care professions, as well as the potential to implement the findings of her study in clinical research and practice. An example of a choice made from considering her context came in the language she used in the question. The term components was used rather than a term such as dimensions. While the latter may have resonated with researchers, components would likely ring truer with clinicians and translate better into practice. In addition, Maxi's process was affected by the fact that she had no previous research experience, and so she was unaware of the challenges that lay ahead. Her supervisor had not used qualitative methods, having exclusively used quantitative methods, so naturally they both assumed she would be doing a quantitative study. However, when her doctoral committee pressed her to articulate her interests, and specifically what really needed to be known, it was clear that a qualitative, not a quantitative, question was needed. Maxi learned a key lesson—methodology and methods follow the question(s), not the other way around. The series of iterations of the questions, and the reasons for the changes, is captured in part in Table 3.1.
TABLE 3.1 ■ The Evolution of Maxi's Research Question(s)

Timeline: Pre-formal proposal writing

Question(s):
• No question(s) had been formally constructed, but preliminary thoughts were that it would address assessing the psychometric properties of established therapeutic relationship measurement tools from psychotherapy in the physical therapy context.

Changes and the Rationale:
• My supervisor's primary methodological lens was quantitative.
• We assumed the therapeutic relationship construct in psychotherapy is transferrable to physical therapy.
• Assessing clinical outcomes requires valid and reliable measurement tools, and there was no tool in physical therapy to assess the quality of the therapeutic relationship.

Timeline: Mid-stage of proposal development

Question(s):
• What is the construct and content validity of a self-report measurement tool specifically designed to assess the quality of therapeutic relationship in physical therapy?
• What are the common themes regarding the definition of positive therapeutic relationship in Edmonton private practice clinics that arise from (a) adult clients' descriptions of receiving physical therapy treatment for musculoskeletal injuries and (b) physical therapists' descriptions of providing treatment to clients with musculoskeletal injuries?
• What are the common themes regarding the components of positive therapeutic relationship in Edmonton private practice clinics that arise from (a) adult clients' descriptions of receiving physical therapy treatment for musculoskeletal injuries and (b) physical therapists' descriptions of providing treatment to clients with musculoskeletal injuries?

Changes and the Rationale:
• My committee members and supervisor probed me about “what needs to be known?” in the discipline. My responses to their probes stemmed from the following:
  ◦ A preliminary literature search that revealed minimal and disjointed research on the conceptual understanding of the therapeutic relationship in physical therapy (i.e., What is it?)
  ◦ My insights from clinical experience as a physiotherapist
• Qualitative methodology was required to explore and answer my questions surrounding the knowledge base of the therapeutic relationship.
• This quantitative question(s) was added. However, feasibility considerations limited my ability to fully assess the psychometric properties of a tool because I was now answering qualitative questions as well. I decided to limit the assessment to construct and content validity. I had not decided which tools I would be assessing.
• I chose the term components over terms such as dimension and elements because it captured a practical perspective important for the overarching purpose of the study—to provide clinicians with knowledge for informing decision-making.
Timeline: Final proposal

Question(s):
• What are the conceptual descriptions regarding the components of the therapeutic relationship in physical therapy arising from (a) adult clients' descriptions of receiving physical therapy treatment for functional limitations due to musculoskeletal conditions and (b) physical therapists' descriptions of providing treatment to clients with musculoskeletal conditions?

Changes and the Rationale:
• The question became very focused, with the population and setting (physical therapists and patients managing musculoskeletal conditions in private practice clinics) written into the study objective: The objective of this research project was to identify and provide in-depth descriptions of the key components of the therapeutic relationship relevant to physical therapists and adult patients managing musculoskeletal conditions in private practice clinics.
• I dropped the quantitative question for two reasons:
  1. My committee members and I became increasingly aware that the qualitative study would be quite involved. Therefore, it would not be feasible within a doctoral project to address all questions.
  2. We secured a qualitative methodologist as a co-supervisor, given my supervisor had no experience with qualitative methods. This, combined with feasibility considerations, allowed us to “let go” of the quantitative question(s).
• My language when formulating the questions began to change. By this point, I had decided interpretive description would be the appropriate methodological approach to respond to the qualitative questions, so I began integrating language consistent with the method (i.e., “conceptual descriptions” versus “common themes”). I also revised language around patient characteristics to make it broader and to hold it open (e.g., “conditions” versus “injuries”).

Timeline: Dissertation

Question(s):
• What are the conceptual descriptions regarding the components of the therapeutic relationship?

Changes and the Rationale:
• My supervisors and I agreed that I could keep the question broad so as not to impose limitations on the nature of the findings and their future transferability to other populations and settings.
Christine, on the other hand, had worked as a research assistant on several projects and teams that used qualitative and mixed methods. She witnessed the teams untangle the web of decisions that underpinned the development of research questions. As such, Christine expected that she might encounter similar struggles and thus devoted 3 weeks to interrogating the assumptions that surrounded the research issue she was exploring. As the area of interest (that is, the influence of inner-city neighborhoods on older women's daily activities) could be explored from several disciplines and subdisciplines (occupational therapy, occupational science, environmental gerontology, geographical gerontology, and human geography, to name just a few), she first needed to understand the assumptions embedded in each discipline and how these would shape the research question. She filled several whiteboards with potential questions (and components of questions), brainstorming words and ideas to try to get at what she wanted to study. Christine then disassembled the questions into their components and substituted words and concepts with others. She examined each of these, eliminating those that truly were not what she desired to study and rewriting others that seemed closer to her intent. For Christine, articulating a question was a lengthy process that involved being judicious about what she did not propose to study just as much as it was about generating a clear research question. The series of iterations of the question is captured in part in Figure 3.1.

FIGURE 3.1 ■ The Evolution of Christine's Research Question

• What are the realities of daily life of older women who live in disadvantaged neighborhoods?
• What physical and social features of disadvantaged neighborhoods affect older women's community activities?
• How do the physical and social features of inner-city neighborhoods shape older women's everyday activities?
• How do the neighborhoods of inner-city-dwelling older women impact their daily activities?

Embracing Rather Than Running From Critique

Central to the development of our question(s) was a process of reflexively critiquing our own question(s). Asking question(s) derived from different viewpoints or emphases will challenge you to consider your question(s) from different angles:

• Who will want to know about your study?
• What are the assumptions that (will) underlie the question?
• What question would your key stakeholder(s) ask?
• How would the question sound to X (e.g., patients) versus Y (e.g., therapists)?
• What do your key stakeholders need to know?
• Can this question be answered in this context?
• Who (e.g., target population, discipline) would be the harshest critics of your question? What would they say? Would it matter what they say (i.e., would you revise your question in light of their feedback)?

These questions can help greatly in refining your research question(s) and address an issue we call illusions of clarity: you believe you have generated clear question(s), until you unveil them to different audiences. For example, the instructor in your research design class, who critiques your question(s) with more questions, may be the first to reveal the illusion. Revising the question(s) from this feedback, you believe you are set; no more holes could possibly be poked in the redeveloped question(s). So now you are ready for the “big reveal” to your supervisor. The result? You again doubt whether you know what you are studying and realize that there are many more questions that could be asked and answered. Humbled yet resilient, you integrate your supervisor's feedback to finalize what you are sure are “the question(s)” (your supervisor is on board, so who is going to object?), only to hand them off to a supervisory committee whose content and context knowledge will likely influence the way your question is viewed. We could go on, but we think you get the point. Different audiences will provide critical feedback that, although devastating in the moment, will ultimately result in question(s) more potent than they were to start.

Landing on a Question(s)

Ultimately, you will have to “land” on a question(s). There is no such thing as a perfect or finite question(s). However, your question(s) must be clear enough to move forward, in the right direction, to inform the rest of the design planning. This carefully considered question(s) must be justifiable.
You must be able to articulate why the question(s) is worth answering, that it can indeed be answered, how it differs from what has already been explored in the discipline, and how it potentially fills a knowledge gap. The study required to answer the question(s) must also be feasible. In other words, it must be within your capabilities, financial resources, and the time span available. The nature of the question(s) may bring up other issues of feasibility, such as relationships with communities and buy-in from stakeholders. For example, although conducting research within a specific marginalized community for the purpose of changing policy is admirable, it may not be realistic for a researcher who has no existing relationships with the community or with policy influencers and has only one year to complete the study. Landing on a question(s) does not mean it is written in stone. There is a certain degree of arrogance in assuming this process is sterile, requiring only one pass through the gate. As you hash out the rest of the design, you will circle back to see if your question(s) accurately reflects your design thinking, and vice versa. It is this iterative process in both question formation and research design that is key to a well-designed, coherent research project.

Key Messages

• Never underestimate the importance of a clear question(s). A clear, congruent research question(s) is essential to the integrity of the research project and sets the direction for the rest of the design planning.
• There are no perfect or finite questions. Developing a research question is a dynamic, iterative process that involves interrogating assumptions and positions.
• Researchers should expect and welcome the struggles that come with landing on a feasible and justifiable question.
• When you get lost and forget what you are studying (and you will), your research question is the touchstone, the North Star, that will get you back on track.

End of invited contribution.

CONCLUSIONS

Like the development of the overall research design itself, developing research problems and questions is an iterative process requiring you to think reflexively. This reflexive process helps you refine your level of focus as you think through your research area and work out what the problem in this area is that your research will focus on, and why. Once you have decided this, you are able to develop research questions that, when answered, will provide the type of knowledge you need in order to find out, by the end of the research, what you hoped to know. To arrive at a good research question, you have to make many decisions about your research design and about the various iterations of the research questions that form part of that design. There is no recipe to follow, and no one size fits all, for “how to” develop a research problem or formulate research questions related to it. It is impossible to give hard and fast rules about, for example, how many research questions are needed, what form they should take, and how they should be written. This is because research questions, as well as any “rules” about them, make no sense if they are detached from thinking about what the study is trying to find out and why.
Nevertheless, although there is no “recipe” to follow, or one-size-fits-all research question template to fill in, there are some key questions to ask yourself, and think about, that can help you focus your research problem and related questions. They include the following:

1. What is the broad area (it could be empirical, substantive, theoretical, or all of the above) that your research is located in?
2. What is it about this area that you are interested in—what is the problem?
3. What is it that you are specifically interested in, in relation to this problem, and why?
4. What contribution do you want your research to make, and to whom, at the end of your study?
5. Do you have the time and resources to be able to do this study? In other words, is it feasible?
6. Is this a problem that should be studied at all? If so, then are these the questions that should be asked about it? In other words, have you asked questions of your research questions in terms of the ethical considerations that we discussed in Chapter 2, and which need to be thought through in all areas of the research design, including the questions asked about any problem?
Asking questions such as these is a key part of developing a well-thought-out, focused, ethical, feasible, and responsible research design. It forces you to ask yourself about the assumptions that you bring with you to the research design table. This includes assumptions about what a “research” problem is, or the form that a “good” research question must take. Thinking through, and asking questions of, those assumptions in turn forces you to think about how and why they affect the type of problems you will focus your research on and the type of questions you will ask. What do these assumptions enable you to ask questions about? What other questions, and types of questions, might be asked? What do you gain or lose from choosing the question(s) you have decided on? How does your choice of a certain type of research problem or question affect the way that you design your research—for example, the type of methods that you will use to obtain information to address those questions? Choices about the problem, and the associated questions, that your research is being designed to address will affect the way you think about, and design, all parts of your research. For example, the methodological approach and associated methods you choose to use in your research design will be based on what type of knowledge, and in what form, will be needed to answer or address those questions. We return to this point to discuss it more fully in the next chapter.

SUMMARY OF KEY POINTS

• Developing your research problem and questions when designing research is an iterative, nonlinear process.
• You will use empirical and theoretical literature to assist in the identification or development of your research problem and questions.
• Theoretical, methodological, and ethical considerations shape the nature, focus, and form of the questions related to that problem.
• A research question is posed in such a way that it can be researched, and when addressed it provides information able to contribute to the body of knowledge relevant to the research problem that the question is related to.
• The form that a research question takes affects other parts of the research design, such as what methods are proposed to be used in that design and why.
• Developing research questions requires reflexivity on the part of the researcher to recognize and acknowledge the impact of the underlying assumptions that shape their perspectives and understandings of what a research question is, and the form that that question must take.
• We can learn much from the experience of others—how exactly did they develop their research questions, and why?

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

deductive reasoning
feasibility
hypothesis
inductive reasoning
research area
research problem
research question
variable

SUPPLEMENTAL ACTIVITIES

1. If you are in the process of designing your own research, think through the following questions, which will enable you to develop a small, well-defined, and feasible study:

• What is the broad area (could be empirical, substantive, theoretical, or all of the above) that your research is located in?
• What is it about this area that you are interested in—what is the problem?
• What is it that you are specifically interested in, in relation to this problem, and why?
• What contribution do you want your research to make, and to whom, at the end of your study?
• Do you have the time and resources to be able to do this study? In other words, is it feasible?
• Is this a problem that should be studied at all? If so, then are these the questions that should be asked about it? In other words, have you asked questions of your research questions in terms of the ethical considerations that we discussed in Chapter 2, and which need to be thought through in all areas of the research design, including the questions asked about any problem?

If you manage to address all of these questions, then you are well on your way to being able to develop a well-thought-out research design. You are also ready to make decisions about how you will do the research.

2. After reading this chapter, discuss with your fellow students or researchers why it is important that, when designing research, the research question(s) drives the choice of methods rather than your choice of methods driving the research question(s) you ask.

FURTHER READINGS

Creswell, J. W., & Creswell, J. D. (2018). Research questions and hypotheses. In J. W. Creswell & J. D. Creswell, Research design: Qualitative, quantitative, and mixed methods approaches (5th ed., pp. 133–146). SAGE.

Flick, U. (2020). From research idea to research question. In U.
Flick, Introducing research methodology (3rd ed., pp. 61–77). SAGE.

Merriam, S. B., & Tisdell, E. J. (2016). Designing your study and selecting a sample. In S. B. Merriam & E. J. Tisdell, Qualitative research: A guide to design and implementation (4th ed., pp. 73–103). Jossey-Bass.

NOTES

1. This chapter includes the following invited contribution: Miciak, M., & Daum, C. (2023). Scratching the underbelly of research design: Developing clear research question(s). In Cheek, J., & Øby, E., Research design: Why thinking about design matters (pp. 59–65). SAGE.
68  Research Design 2. Depending on the type of research problem, questions, and methodology used in the research, significance might mean statistical significance, or significance in terms of the contribution that the study makes to the existing knowledge in the research area, or both. See the discussion in Chapter 5. 3. From Stake (2010, p. 78). 4. This student was one of the authors (Elise Øby) who was returning to postgraduate study and undertaking a master’s study in social sciences after having completed her PhD in mathematics some years prior. 5. Chapter 8 returns to the idea of hypotheses to explore in more detail how to develop a hypothesis and then test it. 6. Clifford Geertz (1973) is largely credited with the introduction of the term thick description into qualitative research. His work builds on Ryle’s (1971) philosophical term. Geertz’s original conception of thick description, which was anthropologically based, was more descriptive in orientation than contemporary uses of the term, which have emphasized an interpretive dimension to this description—for example Denzin’s (1989a) idea of thick interpretation (see Ponterotto, 2006). 7. This is an invited contribution: Miciak, M., & Daum, C. (2023). Scratching the underbelly of research design: Developing clear research question(s). In Cheek, J., & Øby, E., Research design: Why thinking about design matters (pp. 59–65). SAGE. 8. Thorne, S. (2008) Interpretive description. Left Coast Press. 9. Thorne, S., Oliffe, J., Kimsing, C., Hislop, T. G., Stajduhar, K., Harris, S. R., Armstrong, E. A., & Oglov, V. (2010). Helpful communications during the diagnostic period: An interpretive description of patient preferences. European Journal of Cancer Care, 19, 746–754. https://doi.org/10.1111/j.1365-2354.2009.01125.x
4

WHY METHODOLOGY MATTERS WHEN DESIGNING RESEARCH

PURPOSES AND GOALS OF THE CHAPTER

In Chapter 1, we introduced the idea of methodological thinking as shaping the form that a research design takes. Methodology was defined as “the strategy, plan of action, process or design lying behind the choice and use of particular methods and linking the choice and use of methods to the desired outcomes” (Crotty, 1998, p. 3) of a research design or study. From the discussion that we have had in Chapters 2 and 3, you will now understand that the desired outcomes of any research design or study relate to answering the research questions we have identified as central to that study and doing so in a way that is both credible and ethical. Consequently, choosing which methods to use in our study involves thinking about what type of data our chosen methods will enable us to obtain, and whether or not that data will enable us to address our research questions. It also involves thinking about how we will put those methods into practice. When we think in this way, we are thinking methodologically. When we design our research, we will need to think methodologically and make justifiable decisions about what methods will form part of our design. Part of the credibility of our research lies in our being able to justify how the methods we have chosen to use will enable us to obtain the knowledge that we need to answer our research questions. Given this, the purpose of the chapter is to explore how methodological thinking guides, and provides the rationale for, the decisions we make when designing our research. These include decisions about what type of knowledge we will need to answer our research questions, how we can develop that knowledge, the form of data we will need to do so, which methods will enable us to obtain that form of data, and how we will analyze and interpret that data.
We explore ways that our thinking about methodology and methods is influenced by different onto-epistemological assumptions derived from different schools of philosophical thought. This includes the way that we think about what type of methods, and the knowledge produced by them, can be considered scientific. The idea of inquiry paradigms is used as the organizing construct for much of this exploration. We demonstrate how the paradigmatic stance adopted by a researcher guides that researcher’s thinking about matters such as whether the social world can be studied in the same way as the natural and physical worlds, and what valid or trustworthy research findings are. As you are reading the chapter, we encourage you to think reflexively about the methodological assumptions you make when thinking about the design of your research. What views of science and research are they based on? How do these assumptions, declared or otherwise, affect the methods-related decisions you make about that design?
The goals of this chapter are to

• Establish that the way we design our research is shaped by methodological thinking, which provides the rationale for the decisions we make related to the methods used in our research.
• Highlight that our view of what data is, and is for, reflects our methodological thinking and our beliefs about what science and research are, and are for.
• Introduce the idea of paradigms and paradigmatic stances as sets of basic beliefs that guide our thinking when designing research.
• Highlight the role that ontology and epistemology play when thinking about paradigms and paradigmatic stance.
• Explore the idea of an inquiry paradigm as providing an organizing construct for understanding how onto-epistemologically derived thinking impacts how research is thought about and therefore designed.
• Illustrate the effect that onto-epistemological assumptions embedded in three different inquiry paradigms—positivism, post-positivism, and constructivism—have on our methodological thinking when we are designing our research.
• Demonstrate how our thinking about, and therefore understandings of, the quality and credibility of research are affected by the inquiry paradigm in which our research is embedded.
• Emphasize the importance of thinking about whether the choices we make in our research design related to methodology and methods are consistent with each other, as well as with the research questions we are seeking to address.
• Provide examples of how to think through, and therefore become aware of, the methodological assumptions we bring with us to the research design table and why this matters.

THINKING METHODOLOGICALLY

Once we have identified what we want to know more about and why, we are ready to focus our thinking on the way that we will do our research.
This will require us to think about what type of information we will need to address the issues, questions, or hypotheses that our research is focused on, and how we might obtain it. For example, if our general area of interest is how campus lockdown affected students during the COVID-19 pandemic, we might choose to study how that lockdown impacted those students’ motivation. When doing so, we are interested in finding out if there is any quantifiable link between having only digital classes, the lack of social interaction that results from having only digital classes, and students’ motivation. To do this, we will need to collect numerical information in a form and amount that will enable us to make those links in a credible way when analyzing that numerical information. Having decided this (a methodological decision), we are now able to decide what methods we can include in our research design that will enable us to collect that form and amount of numerical information in this quantitative study.
Chapter 4 • Why Methodology Matters When Designing Research   71

On the other hand, if we choose to study what it was like to be a student during a period of campus lockdown, then we will be interested in finding out how students experienced campus lockdown and why. To do this, we will need to collect information about that experience by, for example, interviewing students whose campus has been locked down. When doing so, we will need to consider whether the words that make up these interviews enable us to develop in-depth descriptions of these students’ perceptions and understandings of not being allowed to attend classes or campus physically. Whether or not these words can do this in a credible way in this qualitative study depends on how the interviews are designed, conducted, and analyzed. What these examples illustrate is that we need to have decided what form of data will address the issues, questions, or hypotheses that our research is focused on, and why. Until we decide that, we are not ready to begin thinking about how we can collect our data. For example, will our research need to take a qualitative or quantitative approach? Once we have decided that, we can then begin to think about which specific methods within a quantitative or a qualitative approach will enable us to gain the form of data we need. When we think in this way, we are thinking methodologically. Methodology is “the strategy, plan of action, process or design lying behind the choice and use of particular methods and linking the choice and use of methods to the desired outcomes” (Crotty, 1998, p. 3). The shape that a research design takes is the result of these methodological decisions. Neither our choice of methods nor the data they produce can be understood detached from the methodological thinking that underpins them.
We will begin our exploration of how methodological thinking guides, and shapes, the form that our research design takes by taking a closer look at the idea of data.1 To do so, we pose a series of questions about data that will need to be thought through, and decisions made about them, when designing your research. These include the following: What is data? What types of data are there? What type of data will I need to enable me to address the problems or questions that my research is designed to answer? How do I decide this?

DATA: A CONCEPT SHAPED BY METHODOLOGICAL ASSUMPTIONS

Despite data being a word that appears in most discussions of research design, and reports of research, it is a word that we tend to use without much thought (Koro-Ljungberg, 2016). Data is also a word that, while used a lot, is not often clearly defined. It is as if everyone shares a common understanding of what data is and is for. However, this is not the case. There is considerable variation in the way that data is thought about, and therefore defined. Table 4.1 provides examples of this variation. What does the variation in the definitions in Table 4.1 reveal about the way data is thought about and viewed? One thing evident from these definitions is that data is not just about the numbers or the words collected by a researcher. Data is much more than that. It is about the way that parts of reality are reduced or “chunked” into manageable units (Bernard et al., 2017) able to be analyzed and interpreted to produce the findings or results of our study. What is “chunked” (i.e., what is considered data and why), and how it is “chunked” (i.e., what methods are used or not used to collect and analyze that
data, and why) depends on the way that the idea of data itself is viewed and thought about. For example, data may be viewed as a set of numerical scores or as transcripts of interviews (Black, 1999). It also depends on why the researcher is collecting the data in the first place—the purpose of the research. For example, will the data be used to verify a hypothesis, or will it be used to build understandings of the experiences and perceptions of individuals about a topic or issue of interest?

TABLE 4.1 ■ Different Definitions and Views of What Data Is

Black (1999, p. 25): Appears in many forms—“[f]or example, some components may be numerical, while others may be in the form of transcripts of interviews.”
Bernard et al. (2017, p. 5): Data is created “by chunking experiences into recordable units.”
Schostak & Schostak (2008, p. 91): “a matter of seeing”
Sirkin (2006, p. 590): “All the information we use to verify a hypothesis”
Polkinghorne (2005, p. 138): “accounts gathered by qualitative researchers”
Black (1999); Bors (2018): Synonymous with “set of scores” from “measuring instruments such as tests, questionnaires, and interview or observation schedules” (Black, 1999, p. 24)

The numbers or words that our research produces are a form of raw data that in itself has “little meaning and must be turned into understandable information” (Black, 1999, p. 25). This is an important point. It highlights that just generating long lists of numbers or hundreds of pages of interview transcripts will not in itself provide the answers to our research questions. Whether or not these words and numbers can provide the basis for addressing the issue or problem that our research is being designed to address depends on a number of interconnected methodological considerations. Understanding these connections will also enable the production of appropriate forms of data to address our research questions.
For example, we will need to consider the type of data needed in order to be able to address the questions that guide our research and why. As we saw in Table 4.1, data can be viewed as both “[a]ll the information we use to verify a hypothesis” and “accounts gathered by qualitative researchers.” Which of these forms data takes in a specific research study is a methodological decision based on what we will need to know about in order to address our research questions and what methods we can use to generate, collect, and analyze that specific type of data. Thus, data is about the way that meaning is produced and verified (Crotty, 1998; Black, 1999)—methodological considerations.

The Importance of Bringing Methodological Considerations Related to Data Into Focus

The methodological thinking that underpins the way that we view data itself—what it is, and why we consider one form of data more appropriate than another—remains invisible in most discussions of research design or the research produced by that design. Instead,
the data-related focus of those discussions tends to be on “getting” data. For example, how to “get it,” and what to do when we have “got” it. This is data as a self-contained “thing”-to-be-got type of thinking. The danger with fast-forwarding our thinking to focus on how to “get data,” and therefore skipping over what we understand data to be, is that we do not see data for what it is. Data is not just about the numbers or the words collected by a researcher. Nor is it a thing. Rather, it is a concept that contains many (usually undeclared) assumptions. These are assumptions related to, for example, the form that data can (even must) take in order to be considered scientific, or the type of data that is “stronger” or “more” scientific. Therefore, it is important to be aware of our assumptions about what data is, and is for, when we design our research. The Activity box below provides a way of testing your own assumptions and beliefs about what data is and is for.

Activity

Self-Test: Are You Taking the Idea of Data Itself for Granted?

Is data a word that you tend to use without much thought? Taking this self-test will help you answer this question. Reflexively thinking about the following questions provides you with insights into the beliefs you bring with you when designing your research.

1. When you read the term data in a research report, or think about what type of data you want your research design to produce, what do you understand data to mean?
2. What do you base that understanding on?
3. Do you think about whether there are other ways that data can be thought about besides the way you think about it, or the way that it is presented in a particular research design or report?
4.
If you did allow for the fact that different people may have different ideas about what data is (and is not), then did you consider how different ways of thinking about what data is affect the way we think about, design, and report research?
5. Have you been assuming that everyone else views data in the same way as you do? If so, why?
6. Or perhaps you didn’t really think about any of this at all, or consider it a part of the thinking that you will need to do when designing your research. If so, why not?

To sum up, no matter how data is collected, even in what might seem to be the most objective forms of research, data does not stand alone, completely removed from the researcher. In any type of research design, it is the researcher who decides what type of numbers or words will be collected and called data, how that collection will take place, and how that data will be analyzed. These are subjective decisions made by all researchers based on what they believe “science,” “research,” and “data” are. These decisions
are based on what Guba (1990) referred to as the “basic set of beliefs” (p. 17) held by a researcher. This set of beliefs provides what is known as a worldview or a paradigm that guides the thinking of, and therefore the decisions made by, researchers when they are designing research (Guba & Lincoln, 1994). This includes what a researcher might decide to study, how they will study it, and the way that they interpret and make conclusions about what they have studied. These are methodological decisions. In the next section of the chapter, we take a closer look at the idea of paradigms and how they affect the methodological decisions you make when you are designing your research—whether you realize it or not.

PARADIGMS: SETS OF BASIC BELIEFS THAT GUIDE METHODOLOGICAL THINKING

What is a paradigm? A paradigm is “a set of basic beliefs. . . . It represents a worldview that defines, for its holder, the nature of the ‘world,’ the individual’s place in it, and the range of possible relationships to that world and its parts” (Guba & Lincoln, 1994, p. 107). This worldview shapes the cascade of methodological decisions that organize and give shape to all parts of your research design. For instance, a researcher might hold the view that the principal point of difference between gaining knowledge in scientific and non-scientific ways is the alleged objectivity of scientific knowledge. It is

unlike the subjective understandings we come to hold. Those subjective understandings may be of very great importance in our lives but they constitute an essentially different kind of knowledge from scientifically established facts. (Crotty, 1998, p. 27)

This set of basic beliefs, or paradigmatic stance, of the researcher about what scientific knowledge is affects what that researcher can view as valid ways of producing that scientific research—their research methods.
For example, if subjective understandings are not considered to be scientific knowledge, then findings of research about people’s perceptions or experiences of a topic of interest, that is, their subjective understandings, cannot be considered scientific knowledge. Moreover, neither the methods that are used to elicit and explore those subjective understandings, nor the data collected by applying those methods, nor the analysis and interpretation of that data, will be considered appropriate for producing scientific knowledge. Therefore, when designing research, the researcher’s choice of methods will be limited to those methods able to produce the type of data that yields the objective knowledge this researcher believes to be scientific. This paradigmatic stance will also affect the way that this researcher judges the research design of others, and the results produced by those designs. For example, this researcher will be able to dismiss as invalid research findings based on the interpretation of data gained from, and about, the subjective experiences and perceptions of individuals. This dismissal is premised on, and made possible by, this researcher’s view of, and basic set of beliefs about, the objectivity of scientific knowledge. This researcher considers subjective understandings as constituting “an essentially different kind of knowledge from scientifically established facts” (Crotty, 1998, p. 27).
PUTTING IT INTO PRACTICE

HOW PARADIGMATIC STANCE AFFECTS WHAT YOU STUDY AND HOW

You are interested in turnover of staff in an organization. By turnover we mean staff leaving their positions in that organization. Your paradigmatic stance will influence what you will study about turnover and how. If your basic belief is that the principal point of difference between gaining knowledge in scientific and non-scientific ways is the objectivity of scientific knowledge, then you will design and conduct your research in a way that is in keeping with that belief. Therefore, you will likely use some sort of measurement of variables of interest you have identified as affecting turnover, such as pay rates, flexibility of work time, and so on. You will then collect numerical data about these variables that can be obtained and analyzed in what you consider to be an objective way, drawing on the principles of probability and statistics. Further, in keeping with your basic belief that the principal point of difference between gaining knowledge in scientific and non-scientific ways is the objectivity of scientific knowledge, you will not consider research designs that seek to find out about the perceptions and experiences of people working in that organization about why people either stay or leave their positions. This is because this type of subjective knowledge, while important, is not, in this view, scientific. This means that you will not consider employing a more qualitative approach in your study and methods such as in-depth interviews associated with this approach. This is because you consider that this type of data, and the knowledge that it produces, is not scientific.

The set of basic beliefs held by a researcher about, for example, what data, science, and the purpose of research are will be influenced by their disciplinary background.
The dominant methodological schools of thought and method-related traditions in that discipline will affect how that researcher thinks when designing their research. Therefore, when you are designing research, you need to think about how your “disciplinary socialization experiences . . . may reduce . . . [your] methodological flexibility and adaptability” (Patton, 2015, p. 92). Recognizing, examining, and understanding how your “social background and assumptions can intervene in the research process” (Hesse-Biber & Leavy, 2006, p. 141) is an important part of the reflexive process of designing your research. Before we leave this introductory discussion of paradigms as sets of beliefs or worldviews that guide our thinking when designing our research, it is important to point out that there are various terms used when referring to the philosophical influences and assumptions that impact our thinking about research design. Merriam and Tisdell (2016) point out that these include

traditions and theoretical underpinnings (Bogdan & Biklen, 2011), theoretical traditions and orientations (Patton, 2015) . . . paradigms and perspectives (Denzin & Lincoln, 2011), philosophical assumptions and interpretive frameworks (Creswell, 2013) . . . epistemology and theoretical perspectives (Crotty, 1998, p. 8).

Consequently, there is “no definitive way to categorize the various philosophical and theoretical perspectives” (Patton, 2015, p. 85) that have shaped and defined ways of thinking about and designing research. Therefore, it is important that when designing your
research, you make clear what paradigm(s) or perspective(s) you are drawing on in your research design and are consistent in the terminology you use to refer to them.

ONTO-EPISTEMOLOGICALLY DERIVED ASSUMPTIONS UNDERPIN METHODOLOGICAL THINKING

Different paradigmatic stances draw on different ontological and epistemological assumptions. Ontology and epistemology are philosophically derived concepts about the nature of reality and how we know what we know about that reality (Crotty, 1998). Simplifying to the extreme, ontology is about “philosophical assumptions about the nature of truth and reality and whether it is external or constructed” (Creamer, 2018, p. 43). It is about what exists and what can be considered real. There are different views, or ontological positions, about the nature of reality. Viewing the world as having an external objective reality independent of human perceptions of it gives rise to an ontological position known as realism. In this ontological view, the world exists independently of those living in that world (Schwandt, 2015). On the other hand, viewing the world we live in as constructed by our interactions with that world and our perceptions and understandings of it gives rise to an ontological position known as relativism. In this ontological position, the world is understood as made up of socially constructed meanings. These everyday social interactions and constructions make up the way that the world is. The world does not exist independently of those in it. Epistemological considerations focus on questions such as: What do we mean when we say that we know something? What enables us to claim that we know that? How does knowledge differ from opinion or belief? What “constitutes credible or warranted conclusions or inferences” (Creamer, 2018, p. 43)?
Therefore, epistemological considerations when designing our research focus on the type of knowledge our research will need to produce in order to support credible or warranted conclusions or inferences. In turn, this will require us to think about how we will produce that particular type of knowledge—methodological and methods-related considerations. This will include considerations about the role of the researcher in the generation of that knowledge.2 What ontological viewpoint we adopt affects our epistemological considerations when designing research. For example, a researcher may view, or understand, the world in a way that is congruent with the ontological position called realism. Therefore, this researcher believes that the way that the world actually is must be distinguished from the way that individuals in that world interpret it (O’Reilly & Kiyimba, 2015). The goal of research then is to study this objective reality in order to ascertain scientific facts about it. Put another way, “[T]he facts of the world are essentially there for study. They exist independently of us as observers, and if we are rational we will come to know the facts as they are” (Gergen, 1991, p. 91). It is these facts that are real. The role of the researcher is to discover them. On the other hand, a researcher may view the world in a way that is congruent with the understandings of the ontological position called relativism. In this ontological position, the world can be understood as made up of socially constructed meanings. It is these everyday social interactions and constructions that make up the way the world is. The world does not exist independently of those in it. Such an ontological view does not consider that the only things that can be studied are the objective facts already “out there” waiting for us to discover them. Rather, it allows for the possibility of multiple realities
comprised of the everyday social interactions and constructions that make up those realities. Consequently, human understandings and social constructions are legitimate areas for research and scientific study. It is possible to make credible or warranted conclusions or inferences (Creamer, 2018) from the study of socially constructed meanings.

PUTTING IT INTO PRACTICE

WHAT CAN WE KNOW ABOUT MUSIC? MUSIC VIEWED FROM DIFFERENT ONTOLOGICAL AND EPISTEMOLOGICAL POSITIONS

Music provides an example of how we can understand aspects of the world we live in in different ways, and how those understandings impact what we might seek to know about those aspects. What music is can be understood in different ways. In the natural sciences, music is understood as being comprised of sound waves that are produced by the vibration of the particles of the medium through which the sound waves are moving. Based on this understanding of music, you might design research to measure the wavelengths and frequencies of sound waves produced by different types of music. Or you might be interested in the effect of these sound waves on the anatomy of people’s ears as they listen to music at different volumes and the sound waves travel down the auditory canal and strike the eardrum, causing vibrations in tiny bones in the middle ear. Your role as researcher will be as an objective collector of information that you will then use to support or disprove hypotheses based on the collection and measurement of quantitative data. However, if you understand music as a subjective art form derived from the interaction of sound and how the person hearing it experiences it, then it is possible to know other things about music. For example, you might be interested in knowing about the thoughts and feelings that people associate with different types of music and why they do.
Why is it that certain types of music (usually in a minor key and with a slow tempo, such as, for example, “Ave Maria” by Schubert) may provoke intense feelings of sorrow? Why do other types of music (usually in a major key and with a moderate to fast tempo, such as club songs, national anthems, or coronation anthems) provoke feelings of happiness or even exhilaration on the part of those listening? Why is it that listening to a piece of music can enable people living with dementia to relive positive memories triggered by that specific music?3 What do people subjectively experience when listening to that music, and how might this knowledge be used to develop, for example, therapeutic interventions for people living with dementia? Your role as a researcher is to be an interpreter of information designed to build understandings of the way that perceptions of music are socially constructed.

This example highlights that different views about what music is (ontology) affect how, and what, you can know about music (epistemology), and hence the way that you will research it (methodology and methods). To reflect the way that ontological and epistemological assumptions interface when designing research, you will see scholars referring to the onto-epistemological assumptions that underpin methodological thinking. What onto-epistemological view we adopt affects the methodological decisions we make when we design our research. Onto-epistemological assumptions underpin and shape our research design whether we realize it or not. This is because they affect the methodological decisions we make
when we design our research. While it may be possible to design and conduct research without overtly acknowledging this, or even really thinking about our own assumptions about reality and what can be considered knowledge about that reality, the effect of such assumptions on the research in question is still there. Therefore, one of the first things that you will have to do when designing your research, after you have worked out what your research questions are, is to establish “which paradigm and subsequently which methodology or strategy can best answer the research question” (Welford et al., 2012, p. 29). In the next section of the chapter, we will explore how understanding what an inquiry paradigm is can help you make appropriate methodological choices, including the methods you will use, when designing your research.

INQUIRY PARADIGMS AND HOW THEY CONNECT TO METHODOLOGICAL THINKING

Guba and Lincoln (1994) developed the idea of inquiry paradigms to overtly emphasize, and capture, the idea of ontology, epistemology, methodology, and methods as closely connected concepts. The intersections between ontology, epistemology, methodology, and methods affect what is studied, how it is studied, and the role that the researcher plays in that study. Inquiry paradigms define “for inquirers what it is they are about, and what falls within and outside the limits of legitimate inquiry” (Guba & Lincoln, 1994, p. 108). There are different types of inquiry paradigms, and each one comprises “a set of philosophical assumptions that are inherently coherent about the nature of reality and the researcher’s role in constructing it that is agreed upon by a community of scholars” (Creamer, 2018, p. 43). These assumptions guide the methodological decisions that we make. Inquiry paradigms provide us with an organizing construct for thinking about the philosophical influences and assumptions that impact how research is thought about, and therefore designed.
In the discussion to follow, we take a brief look at three different inquiry paradigms—positivism, post-positivism, and constructivism. When doing so, we demonstrate the effect that the onto-epistemological assumptions embedded in them have on a researcher’s methodological thinking when designing their research. Our aim is to unpack the point made by Lather (2006) that “science is not the same in all paradigms in terms of ontology, epistemology and methodology” (p. 37).

Positivism

Positivism has been a dominant inquiry paradigm in western scientific thought for many hundreds of years. It is based on a “conviction that scientific knowledge is both accurate and certain” (Crotty, 1998, p. 27). The task of the researcher is to study the reality that already exists “out there” (Lincoln & Guba, 2013, p. 38). Consequently, positivism is premised on a realist ontological position. The researcher’s task, then, is to discover and establish objective facts about the aspect of that reality being studied. Therefore, the methods used in scientific research to produce scientific knowledge must enable the production of that objective knowledge.

The belief that inquiry must be objectively carried out to discover “how things really are and really work” . . . in turn suggests a methodology which is essentially experimental and manipulative, in an effort to sort out the various influences (“variables”) that determine the true state of affairs, and eliminate the confounding ones. (Lincoln & Guba, 2013, p. 38)
Chapter 4 • Why Methodology Matters When Designing Research   79

In this way of thinking, what can be considered science is reduced to, and only includes, what we can observe, measure, and then explain using those measurements. In turn, this reduces ways of doing research, and producing what can be called science, to scientific methods designed to observe phenomena—“the observation in question being scientific observation carried out by way of the scientific method” (Crotty, 1998, p. 20). The scientific method is a way of thinking about, and doing, research derived from the way research has been done, and science produced, in the physical and natural sciences. Research designed according to the scientific method should

• have a measurable/testable hypothesis (or testable propositions within the hypothesis),
• have steps to test the hypothesis that must be able to be repeated,
• be conducted as objectively as possible,
• outline the research method/design in enough detail that it can be replicated by other researchers and the findings either supported or disproved. (Adapted from Cutcliffe & Harder, 2012, p. 3)

Other views of research, science, methodology, and methods are deemed invalid on the basis that they are not scientific as they do not conform to “the” scientific method.

TIP: KEY FEATURES OF POSITIVIST THOUGHT

A useful discussion of the key features of both positivistic thought itself and the research that results from the assumptions on which this thought is founded is provided by Guba and Lincoln (1994, pp. 109–110). In Table 4.2, we present a summary of their key points.

TABLE 4.2 ■ Key Features of Positivist Thought

Ontology: Realism
• An apprehendable reality is assumed to exist, driven by immutable natural laws and mechanisms.
• Knowledge of the “way things are” is conventionally summarized in the form of time- and context-free generalizations, some of which take the form of cause-effect laws.
• Research can, in principle, converge on the “true” state of affairs.

Epistemology: Dualist and objectivist
• The investigator and the investigated “object” are assumed to be independent entities, and the investigator to be capable of studying the object without influencing it or being influenced by it. When influence in either direction (threats to validity) is recognized, or even suspected, various strategies are followed to reduce or eliminate it.
• Inquiry takes place as through a one-way mirror.
• Values and biases are prevented from influencing outcomes, so long as the prescribed procedures are rigorously followed.
• Replicable findings are, in fact, “true.”

(Continued)
TABLE 4.2 ■ Key Features of Positivist Thought (Continued)

Methodology: Experimental/manipulative
• Questions or hypotheses are stated in propositional form and subjected to empirical test to verify them.
• Possible confounding conditions must be carefully controlled (manipulated) to prevent outcomes from being improperly influenced.

Source: From Guba and Lincoln (1994, pp. 109–110).

Critiques of Positivism

There is no doubt that in the physical and natural sciences, hypothesis-driven deductive studies are appropriate ways to study that physical reality. However, debates and disagreement arise among researchers when this view of science and research is used to reject outright other ways that research and science might be thought about. Such outright rejection restricts possibilities for thinking about research to the same type of thinking and research designs used to study the natural sciences. It assumes that “the natural and social sciences should and can apply the same principles to collecting and analyzing data” (Flick, 2015a, p. 20). Not surprisingly, many scholars in the social sciences have critiqued this assumption. These scholars question whether it is possible to limit study of social settings, and the people in them, to the type of research designs that have developed from the natural sciences. This is because

the social sciences are strongest where the natural sciences are weakest; just as the social sciences have not contributed much to explanatory and predictive theory, neither have the natural sciences contributed to the reflexive analysis and discussion of values and interests, which is the prerequisite for an enlightened political, economic, and cultural development in any society. . . . [S]ocial science never has been, and probably never will be, able to develop the type of explanatory and predictive theory that is the ideal and hallmark of natural science. (Flyvbjerg, 2001, pp.
3–4)

You will note that Flyvbjerg’s critique is not about the types of methods, data, and knowledge associated with the positivist inquiry paradigm per se. Rather, the critique is about only allowing one view of what science is, and how scientific research must be “done.” As Crotty (1998) reminds us, “If we want to quarrel with the positivist view, our quarrel . . . will have to do with the status positivism ascribes to scientific findings. Articulating scientific knowledge is one thing; claiming that scientific knowledge is utterly objective and that only scientific knowledge is valid, certain and accurate is another” (p. 29, italics added).

Post-Positivism

As a result of the sort of critiques of positivism discussed above, largely from the social sciences, different inflections and forms of positivism emerged.4 For example, post-positivism5 emerged as a reaction to the basic beliefs underpinning positivism that “real knowledge (as opposed to mere beliefs) was limited to what could be logically deduced from theory, operationally measured, and empirically replicated” (Patton, 2002, p. 92). Instead, post-positivist thought recognizes that no matter what methodological approach and associated methods form part of a research design, it is impossible to entirely remove the influence of the researcher on both that design, and when that research is put into action. Therefore,
“[N]o matter how faithfully the scientist adheres to scientific method, research outcomes are neither totally objective nor unquestionably certain” (Crotty, 1998, p. 40). In other words, there is always a subjective element in research and in the results that arise from that research. Consequently, from a post-positivist perspective, research is designed to distinguish between beliefs and valid beliefs, rather than to produce absolute truths (Campbell & Russo, 1999). Whether or not a belief is valid is based on judgments about the validity of the empirical evidence that the belief is based on, and how that evidence was obtained and analyzed. Therefore, the task of the researcher when designing research is to develop a scientific strategy of “managing subjectivity as tenaciously as possible, to come as near to ‘truth’ as human frailty permits. Experimentation and manipulation are retained as the basic methodological strategy, even while it is conceded that they cannot produce ultimately infallible results” (Lincoln & Guba, 2013, p. 38). Put another way, post-positivism is a “humbler version of the scientific approach . . . proponents of which . . . remain in the positivist camp but temper very significantly the status they ascribe to their findings, the claims they make about them” (Crotty, 1998, p. 40).

Positivism and post-positivism are not the only possible inquiry paradigms and ways of thinking about research, science, and research design. There are other inquiry paradigms and related approaches to research that take a different view of what falls both within, and outside of, the limits of legitimate inquiry. One such inquiry paradigm is constructivism.
Constructivism

Compared to positivist and post-positivist thought, constructivist thought takes a very different view of reality (ontology), what can be known about that reality, and how we can know that (epistemology), and consequently what is, and is not, legitimate inquiry (methodology and associated methods). Unlike positivism and post-positivism, constructivism does not presuppose a reality removed from those who interact with that reality. Rather, it begins with “the premise that the human world is different from the natural, physical world and therefore must be studied differently” (Patton, 2015, p. 121, italics added). Truth is not waiting “out there” in an object to be discovered; rather, meaning is constructed by the interaction of humans with that object and each other. Consequently, constructivism allows for the idea of multiple realities which exist “in the form of multiple mental constructions, socially and experientially based, local and specific, dependent for their form and content on the persons who hold them” (Guba, 1990, p. 27).

Therefore, working within a constructivist paradigmatic stance, the researcher does not seek to eliminate the subjective thoughts, feelings, and opinions of those being researched to concentrate only on specific objective “facts” and variables that must be controlled. Instead, the researcher actively seeks to understand “the complex world of lived experience from the point of view of those who live it” (Schwandt, 1994, p. 118). This is reflected by the fact that people in the study are referred to as participants, rather than subjects of the research. If we adopt this paradigmatic view when we are designing our research, then

• we will work with participants in our research as “a co-constructor of knowledge, of understanding and interpretation of the meaning of lived experiences” (Lincoln et al., 2011, p. 110).
• our research will be designed in such a way as to enable us to seek out, obtain, and understand our research participants’ individual views, perceptions, and understandings about the aspect(s) of their reality that we are interested in knowing something more about.6
• choosing to use methods such as individual in-depth interviews and observations in social settings7 as part of our research design would be methodologically appropriate.
• our research will be primarily inductive, as the information obtained from research participants is used to build understandings of the aspects of the social constructions that we are focusing on in our research.

This is a very different view of research and the role of the researcher than in positivist or post-positivist thinking, where the role of the researcher is to obtain information about specific variables of interest from the people who are the subjects of, rather than participants in,8 that research. These variables of interest have been determined in advance of the research by the researcher as the ones that can explain, or enable the researcher to understand, what is going on in a particular situation. The role of the researcher is to, as objectively as possible, obtain from these research subjects specific piece(s) of information in the form of data about variables of interest that the researcher can then use to deductively test theories. In these inquiry paradigms, the researcher deliberately maintains distance from the subjects (as opposed to participants) of that research.

Activity
Exploring How Inquiry Paradigms Affect Our Thinking About “What Falls Within and Outside the Limits of Legitimate Inquiry”

You want to study why children succeed at school.

1. Think about the way that you might go about studying this topic if you draw on post-positivist thinking when doing so. How will this affect what your research questions are (and are not)? In turn, how will this affect the methods that you will use in your study and the type of knowledge you will produce about success at school?

2. Now repeat the above exercise but this time drawing on constructivist thinking. How will this affect what your research questions are (and are not)?
In turn, how will this affect the methods that you will use in your study and the type of knowledge you will produce about success at school?

Differences in your answers to 1 and 2 above highlight the effect that the sets of basic beliefs, or paradigmatic assumptions, that you bring to your research have on the way that you design and think about that research, and “what falls within and outside the limits of legitimate inquiry” (Guba & Lincoln, 1994, p. 108).

INQUIRY PARADIGMS AFFECT THINKING ABOUT WHETHER RESEARCH IS CREDIBLE

The basic beliefs that make up inquiry paradigms affect whether or not a research design, and the findings produced by it, can be considered credible. This is because the onto-epistemological assumptions embedded in those basic beliefs shape understandings of what is legitimate inquiry in a scientific study. This includes the methods chosen in that study and the type of data they produce.
For example, research drawing on a constructivist inquiry paradigm is designed to produce trustworthy “reconstructed understandings of the social world” (Denzin & Lincoln, 2018c, p. 98). It is not necessarily designed to identify patterns or consistencies in these findings, which would be the case if the goal of the research is to produce findings that are objective, and hence generalizable across groups of people. Instead, “Each person who participates in the study provides a different view on the topic being investigated. . . . Dissonant points of view are acceptable [illustrating] the complexity of multiple realities” (Norum, 2008, p. 737). Therefore, different criteria will be used to judge the credibility of research if the researcher draws on a post-positivist or positivist paradigmatic stance than if they draw on a constructivist paradigmatic stance. In a constructivist inquiry paradigm, positivist-derived criteria of validity9 “are replaced by such terms as trustworthiness and authenticity” (Denzin & Lincoln, 2018c, p. 98). The trustworthiness and authenticity of the reconstructed accounts of aspects of the social world that form the findings is what makes the research, and the conclusions reached from research drawing on the set of basic beliefs underpinning a constructivist inquiry paradigm, credible.10

This highlights that the way that the credibility of research is understood arises from the paradigmatic stance that a researcher brings to that research. Whether or not research is considered credible (or even research at all) will be based on the basic beliefs that make up that paradigmatic stance. Therefore, judgments about the credibility of research make no sense unless they are in relation to a particular worldview, or basic set of beliefs, about research. Criteria for making judgments about the credibility of research do not automatically transfer from one paradigmatic stance to another.
Patton (2015) provides a useful summary of the different ways in which the quality and credibility of research can be thought about, depending on the paradigmatic view adopted by a researcher. He does this by identifying alternative sets of criteria for doing so. This summary can be found in Table 4.3.

TIP: ALTERNATIVE SETS OF CRITERIA FOR JUDGING THE QUALITY AND CREDIBILITY OF RESEARCH ARISING FROM DIFFERENT PARADIGMATIC STANCES11

TABLE 4.3 ■ Traditional Scientific Research Criteria
1. Objectivity of the inquirer (minimize bias)
2. Hypothesis generation and testing
3. Validity of the data
4. Interrater reliability of codings and pattern analyses
5. Conclusions about the correspondence of findings to reality
6. Generalizability (external validity)
7. Strength of causal explanations (attribution analysis)
8. Contributions to theory
9. Independence of conclusions and judgments
10. Credibility to knowledgeable disciplinary researchers (peer reviews)

(Continued)
TABLE 4.3 ■ Social Construction and Constructivist Criteria (Continued)
1. Subjectivity acknowledged (discuss and take into account inquirer perspective)
2. Trustworthiness and authenticity
3. Interdependence: relationship based (intersubjectivity)
4. Triangulation (capturing and respecting multiple perspectives)
5. Reflexivity
6. Particularity (doing justice to the integrity of unique cases)
7. Enhanced and deepened understanding (verstehen)
8. Contributions to dialogue
9. Extrapolation and transferability
10. Credible to and deemed accurate by those who have shared their stories and perspectives

Source: From Patton, M. Q. (2015). Qualitative Research & Evaluation Methods (4th ed.). SAGE. (Part of Exhibit 9.7, p. 680.)

Why Is Thinking About Your Paradigmatic Stance Important?

Our discussion of how inquiry paradigms affect thinking about whether research is credible demonstrates why thinking about what your paradigmatic stance is when you are designing your research is important. It forces you to overtly focus on the intersections between your methodological choices in your research design and the set of basic beliefs you hold that resulted in you making those choices. This is a focus which has almost disappeared in discussions in many research methods courses and textbooks where

students are more often than not taught particular “methods of data collection” (such as interviews, case studies, focus groups, ethnography, other basic research design techniques, etc.). . . . [I]t is few and far between that philosophy of science and philosophy of inquiry seminars are required of graduate students—and even fewer still . . . that call into question or contest the very notion of data or evidence itself. (Denzin & Giardina, 2016, p. 6)

The loss of this focus has resulted in many of these courses, and the methods textbooks they use, presenting methods exclusively as ready-made, stand-alone, and self-contained techniques or procedures.
These techniques and procedures can then be selected, copied, and pasted into a research design. This masks the fact that when you select, copy, and paste a ready-made method into your research design, you are copying and pasting much more than just a technique or procedure. You are also copying and pasting the methodological and onto-epistemological understandings from which this method was derived. Therefore, it is important that you know what these understandings are, to make sure that they are congruent with the type of knowledge that you will need to address your research questions, in order for your research findings to be credible.
THE IMPORTANCE OF ASKING METHODOLOGICAL QUESTIONS OF YOUR RESEARCH DESIGN

Returning to a point we made in Chapter 1, between us, we have many years of experience both doing research ourselves and acting as advisors for students’ research. We have noticed that the researchers and students who navigate the challenges of the process of designing research well are those who take the time to think through, ask questions of, and then make explicit the methodological decisions that they have made related to the design of their studies. They are constantly asking themselves a series of interrelated questions about the methodology of that emergent design. Questions such as, What type of knowledge will I need to address the problem or questions I want to ask? What is an appropriate way to obtain that knowledge? Appropriate in what sense?

They are also the researchers and students who reflexively think through the effect that the way they answer the above types of questions has on their overall research design. They ask themselves questions such as these:

• What happens to this part of the design if I make this methodological decision and not another?
• How might this affect decisions that I have already made about other parts of my research design?
• Why am I thinking about these questions, and adopting a stance in relation to them, in the way I am?

Asking these types of methodological questions of your developing research design enables you to reveal and understand your own paradigmatic viewpoint—the set of basic beliefs you have about what research and science are. It also enables you to recognize and think through the effects of those standpoints on the choices you have made in that design process. This type of reflexive thinking makes you accountable, and thereby forces you to accept responsibility, for the methodological decisions that you make when designing your research.
This is because this thinking enables you to expose, and therefore become aware of, assumptions that you, or others, may be making about research, methods, data, and what valid scientific knowledge is, or even can be.

Activity
Thinking Through These Types of Key Questions When Designing Your Research

Think of a topic or research questions that you are interested in studying. Now write down what you are thinking of studying about that topic and how. Next, write down why you are thinking about studying that topic in this way. Think about what assumptions you might be making about research, methods, data, and what valid scientific knowledge is, or even can be. Try to work out why you are making those assumptions. What are you basing them on?
Avoiding the Misuse of Methods

Knowing the methodological and onto-epistemological understandings from which a method is derived can also help avoid the misuse of methods. This is because “abuses and misinterpretations/misapplications of a method” (Cutcliffe & Harder, 2012, p. 13) are more likely to arise if a method is detached from these understandings.12 Neither methods nor the data produced by them can be understood apart from the methodological considerations and choices in the overall research design in which they are embedded. If removed from these understandings, methods are reduced to being merely techniques—sets of procedural rules to follow.

This way of thinking also helps us to understand, and provides new ways of looking at, why students (and even more experienced researchers) constantly struggle with methodology-related questions such as these:

• Do I need to use a hypothesis in my research design?
• Is there one type of method that is more scientific than another?
• Is this really research or just opinion?
• What is the use of this approach if I can’t predict and generalize at the end of the study?
• Why won’t my supervisor or thesis committee allow me to use a particular methodological approach?
• Am I more likely to get my research published or funded if I use this method rather than another?

CONCLUSIONS

Any research design is shaped and informed by onto-epistemological, methodological, and method-related assumptions at all parts of the process—sometimes without us even being aware of it. This is why we have emphasized the importance of remembering that “science is not the same in all paradigms in terms of ontology, epistemology and methodology” (Lather, 2006, p. 37). The intersections between our ontological, epistemological, and methodological assumptions must be acknowledged, and their effects understood, when thinking about our research design.
This is because these intersections profoundly affect all areas and aspects of that design. Therefore, it is important when designing research to think about, recognize, and acknowledge these intersections and how they impact the decisions we make. How will, do, or have they influence(d) our thinking and choices at all parts of the research design process—including our choice of methodology and associated methods? This is a type of reflexive thinking that enables you to focus your thinking on the connections between the parts of your research design, rather than focusing on each part of
that design independently and in isolation. Key methodology-related questions to ask include the following:

1. What type of knowledge will I need to answer the questions that my research is designed to answer?
2. What methods will I use in my research design to generate data, the analysis and interpretation of which can produce this knowledge?
3. What methodological considerations govern the choice and use of those methods?
4. Is my choice of methods consistent with those methodological considerations, and if so, how?
5. What onto-epistemological assumptions does that methodology, and therefore the methods derived from that methodological thinking, draw on?

Such iterative thinking makes it possible for you to ensure that all the considerations, assumptions, and choices that you make in your research design related to matters of data, method, methodology, and onto-epistemology are consistent with each other, and also with what you need to know or find out in order to address the research question(s) you are asking.

When thinking about how to answer these types of questions, it is useful to keep in mind Denzin’s (2009, p. 318) reminder about what is “good work” when making methodological and method-related decisions in our research design. He notes that any “criteria of good work” apply only to work within a particular paradigm or understanding of research and science, and not necessarily within other paradigms. This is why thinking about research design and what makes for “good,” “better,” or “best” research and science is methodologically specific and cannot be generalized across different understandings of research and science. “Better” methods will be those that enable the collection of a form of data, the analysis and interpretation of which can provide the type of knowledge needed to address the purpose of the research. This is why methodology matters when we are designing research.
In the next chapter, we continue our discussion of why methodology matters. We do this by exploring the methodological thinking that underpins qualitative and quantitative approaches to research and how they are designed.

SUMMARY OF KEY POINTS

• Thinking methodologically enables us to understand the choices that researchers make related to the methods they use in their research. It is a central plank of any research design.
• Decisions about what form data can take, how it can be obtained and analyzed, and what it can be used for are methodological ones.
• Paradigms are sets of basic beliefs that guide the choices and decisions of researchers, such as what can be studied, how, and why.
• Paradigms differ from one another in terms of the onto-epistemological assumptions that they draw on.
• Ontology and epistemology are philosophically derived concepts about the nature of reality (ontology) and how we know what we know (epistemology) about that reality.
• Inquiry paradigms provide a way of thinking about how onto-epistemological considerations shape and intersect with the methodological decisions we make when designing our research.
• This includes decisions about what can be studied, how it can be studied, and what form the findings must take in order for the research to be considered credible.
• When you decide to apply a specific method for data collection in your research, you adopt the methodological and onto-epistemological understandings from which this method was derived.
• Therefore, you must know what these understandings are in order to make sure that they are congruent with the type of knowledge that you will need to address your research questions.
• It is important to remember that “science is not the same in all paradigms in terms of ontology, epistemology and methodology” (Lather, 2006, p. 37).
• Consequently, neither methods nor the data produced by them can be understood apart from the methodological considerations and choices in the overall research design in which they are embedded.

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

constructivism
credibility of research
data
epistemology
inquiry paradigms
onto-epistemological assumptions
ontology
paradigm
paradigmatic stance
positivism
post-positivism
realism
relativism
scientific method

SUPPLEMENTAL ACTIVITIES

1. What are your “paradigmatic inclinations” (Nagel et al., 2015, p. 367)? Read the following article:

Nagel, D. A., Burns, V. F., Tilley, C., & Aubin, D. (2015). When novice researchers adopt constructivist grounded theory: Navigating less travelled paradigmatic and methodological paths in PhD dissertation work. International Journal of Doctoral Studies, 10, 365–383.
This article provides an excellent example of how the authors reflexively navigated a range of choices when deciding on “a methodological approach that (a) was appropriate to answering our research question, (b) resonated with the philosophical values for knowledge development within our disciplines, and (c) fit our personal beliefs, values, and goals” (p. 336). The authors conclude
that anyone doing research “must be able to rationalize and defend each choice made in the research design” (p. 380), which includes “gaining insight to one’s own understanding of the epistemological, ontological, and methodological underpinnings” (p. 380) of the chosen research design. Now, ask yourself the following questions:

• What are your “paradigmatic inclinations” (Nagel et al., 2015, p. 367)?
• How are you orienting yourself toward a methodological approach to answer your research questions?
• What are your beliefs about how those questions can be answered?
• Are there other possible ways of addressing your research questions?
• If there are, will those other ways produce different knowledge than your preferred approach?
• Does it matter?

2. What might you be taking for granted when you design your research? Go back to the activity box Self-test: Are you taking the idea of data itself for granted? in this chapter and read it one more time. Now, answer the questions below:

A. When you read the term scientific in a research report, or think about how to design research to make sure that your research is scientific, what do you understand scientific to mean?

B. When you read the terms credible or trustworthy in a research report, or think about how to design research to make sure that the findings of that research are credible and trustworthy, what do you understand credible and trustworthy to mean?

C. When you read the term knowledge in a research report, or when you, while in the process of designing research, join conversations in the body of knowledge built up by the work of others (and which is relevant to the various areas that make up your research design), what do you understand knowledge to mean?

• What do you base your answers to A, B, and C on?
• Do you consider whether there are other ways that the terms scientific, credible, trustworthy, and knowledge can be thought about besides the way you think about them, or the way that they are presented in a particular research design or report?
• If you did allow for the fact that different people may have different ideas about what scientific, credible, trustworthy, and knowledge mean, did you consider how different ways of thinking about these terms affect the way we think about, design, and report research?
• Have you been assuming that everyone else thinks about these terms in the same way as you do? If so, why?
• Or perhaps you didn’t think about any of this at all, or consider it part of the thinking that you will need to do when designing your research? If so, why not?

Reflexively thinking about this series of questions (and others like them) will provide you with insights into assumptions you have about what research is and therefore how it might be designed.
90  Research Design

FURTHER READINGS

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. Allen & Unwin.
Greener, I. (2011). Designing social research: A guide for the bewildered. SAGE.

NOTES

1. You will notice that, depending on the context, at times we refer to data as plural and therefore use “data are.” Other times we use “data is.” Many researchers, often those from areas such as mathematics, statistics, and computer science, understand data to be a plural noun. They think of data as a set of results related to a specific study. They therefore write “data are” and not “data is” when discussing that data set—the specific numbers or words collected by a researcher. However, you will also see data referred to as a concept not necessarily related to the specific findings of a specific study. In these instances, the term data is used in the same way as terms such as information. Therefore, we write “data is” just as we write “information is.”
2. We return to and develop this point in later parts of this chapter as well as in the discussion of qualitative and quantitative approaches to research in the next chapter.
3. See https://www.youtube.com/watch?v=OT8AdwV0Vkw
4. For a detailed discussion of this, see Crotty (1998), pp. 18–41.
5. You will see this term written in several different forms: post-positivist, postpositivist, and post positivist being some of them. We refer to the term as post-positivist but when citing others retain the form of the term that they have used.
6. See also Chapter 5.
7. These qualitative methods are discussed in Chapters 6 and 7.
8. We return to this idea and explain it more fully in Chapter 5.
9. The idea of validity is discussed in more detail in Chapters 8 and 9.
10. We return to the idea of trustworthiness and how to make decisions about it in Chapters 6 and 7.
11. Patton (2015) actually called this Alternative Sets of Criteria for Judging the Quality and Credibility of Qualitative Inquiry as he was writing a textbook about qualitative research and evaluation methods. However, the points he makes can be applied more widely than just to qualitative inquiry.
12. For example, see our discussion in Chapter 7 of how the logic of inquiry underpinning representative sampling in quantitative approaches is misapplied to the idea of maximum variation sampling in qualitative approaches.
5 QUALITATIVE AND QUANTITATIVE APPROACHES TO DESIGNING RESEARCH

PURPOSES AND GOALS OF CHAPTER

As the first of a series of five chapters about qualitative and quantitative approaches to designing research, this chapter explores the methodological thinking associated with, and underpinning, qualitative and quantitative research approaches. Our focus is the effect that this methodologically derived thinking has on the way that qualitative and quantitative research is designed. We demonstrate that the different logic of inquiry that underpins qualitative and quantitative research affects what form qualitative and quantitative data takes, how that data is collected and analyzed, and therefore what type of knowledge can be produced by the interpretation of that data. Throughout the discussion, we highlight that there is variation between types of qualitative approaches and types of quantitative approaches to research. Both approaches are diverse. Despite this, it is still possible to identify some common features within each approach. These features reflect the methodologically derived principles and understandings from which the design of qualitative research or quantitative research approaches emerge. These common features and principles provide a conceptual framework for the discussion about qualitative and quantitative approaches both in this chapter and in the chapters that follow it. In all of them, our level of focus is how the thinking that gave rise to these common features and principles affects the shape that your research design takes. This includes when you are making decisions about how to collect, analyze, and interpret data using either qualitative (Chapters 6 and 7) or quantitative (Chapters 8 and 9) methods.

The goals of this chapter are to

• Highlight that the terms qualitative and quantitative when applied to research, data, analyses, and methods are umbrella terms, neither of which can be reduced to a single set of understandings.
• Emphasize that qualitative and quantitative approaches reflect different research purposes as well as different logics of inquiry.
• Identify common features associated with qualitative and quantitative approaches, respectively.
• Explore the variation between different types of qualitative, and different types of quantitative, approaches.
• Demonstrate that any understanding of qualitative and quantitative approaches to research cannot be detached from the methodological thinking and assumptions that shape those approaches.
• Illustrate that, therefore, what makes a research design, or an aspect of that design, “qualitative” or “quantitative” is much more than simple dichotomies such as the data produced being in the form of words (qualitative) or numbers (quantitative).
• Emphasize that the answer to the question of whether a qualitative or quantitative approach is “better” or “best” is that “it depends.” This is because this question can only be answered in relation to a specific research design and a specific research problem and related research questions.

QUALITATIVE AND QUANTITATIVE RESEARCH STRATEGIES

It will not take you long when reading about ways of designing research to come across the terms qualitative and quantitative. For example, you will read reports of research described as using qualitative or quantitative approaches, or employing qualitative or quantitative methodology, or using qualitative or quantitative methods, or producing qualitative or quantitative data. You will also see entire research designs being described as qualitative or quantitative designs.

Qualitative and Quantitative Approaches Reflect Different Research Purposes

Qualitative research approaches are appropriate when the purpose of the research is to gain qualitative and deep insights about, and understandings of, people’s lives or social settings. Such insights “can help explain crises, injustices, and everyday life in new and intriguing ways. It may push you into uncomfortable spaces whereby you see things differently” (Mayan, 2009, p. 10).
Consequently, qualitative research approaches, as their name suggests, will always produce nonnumerical data in the form of texts or words of some kind.1 The research will be designed in such a way as to enable qualitative interpretations of the perceptions or experiences of people about a specific aspect(s) of the everyday context(s) in which they exist. For example, people’s experiences of, and perceptions of, voting (e.g., Hammond et al., 2021), or being made redundant (e.g., Karsavuran, 2021). Although procedures for the research will be identified beforehand, qualitative research designs are characterized by “built-in flexibility, to account for new and unexpected empirical materials and growing sophistication” (Denzin & Lincoln, 2005b, p. 376).

On the other hand, quantitative approaches are appropriate when the purpose of the research is to quantify in some way something that you want to know about. Researchers who employ a quantitative approach typically aim to identify, classify, and quantitatively describe characteristics of, and tendencies in, a given (large) group of people (i.e., a study population). For example, how voting intentions are related to socioeconomic features such as educational attainment and employment status in a given population (e.g., Alabrese et al., 2019). To employ this sort of approach to inquiry you will need to follow set mathematical and statistically based procedures. Consequently, quantitative research
approaches, as their name suggests, will always produce numerical data from measuring something of interest. This is so that the data can be used to make valid probabilistic and statistically based interpretations about those specific characteristics.2 Consequently, whether we use a qualitative or quantitative approach in our research design depends on what the purpose of the study is. What are we trying to say something about at the end of the study?

A Word of Caution

A word of caution is needed at this point. Often, the different forms of the data produced by each approach (e.g., nonnumerical vs. numerical) are used to differentiate between qualitative and quantitative approaches and even define what these approaches are. For example, the “most basic definition of qualitative research is that it uses words as data . . . collected and analyzed in all sorts of ways. Quantitative research, in contrast, uses numbers as data and analyzes them using statistical techniques” (Braun & Clarke, 2013, pp. 3–4). While the use of the terms qualitative and quantitative to name and describe these approaches to research may reflect the type of data that these approaches generally produce, it is too simplistic to define what these approaches are on the basis of the form of data they produce. Rather, the form that data takes reflects the purpose of the research, as well as what type of information is needed to achieve that purpose. It also reflects the different logic of inquiry underpinning each approach.

Qualitative and Quantitative Approaches Reflect Different Logics of Inquiry

Inductive and deductive thinking, as we discussed in Chapter 3, “reflect different ways of shifting between data and concepts. Inductive approaches tend to let the data lead to the emergence of concepts; deductive approaches tend to let the concepts . . .
lead to the definition of the relevant data that needs to be collected” (Yin, 2016, p. 100). Put another way, inductive and deductive thinking differ in the logic of inquiry that they employ.

Quantitative Approaches Employ Deductive Reasoning

Quantitative approaches, and quantitatively driven research designs, are often described as theory or concept testing and therefore deductive. This is because when designing their research, the researcher employing a quantitative approach begins with a defined set of variables or concepts related to the area of interest that they want to know something specific about. They then design their research to measure those variables and concepts or test assumptions about them. For example, if a researcher is interested in the broad area of media coverage of vaccination, there are a number of possible focuses related to that area. One possibility is that a researcher is interested in how the uptake of a certain vaccine is affected by media coverage about that vaccine.3,4 Specifically, they might want to know about whether any effect of media coverage identified is different across age groups, ethnic groups, or geographical locations, and they might want to test whether a specific hypothesis about such differences is supported by empirical evidence. For example, the hypothesis that the number of media reports questioning the safety of a specific vaccine influences the decline in vaccine uptake more among people born after 1995 compared to people born in 1995 or earlier.
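To make this deductive logic concrete, the hypothesis above can be sketched as a simple computation. The following Python sketch is purely illustrative: the cohort names, sample sizes, and vaccination counts are all invented, and a real study would use data from an actual questionnaire and dedicated statistical software rather than a hand-rolled test.

```python
import math

# Hypothetical questionnaire results (invented for illustration only):
# vaccine uptake counts in two birth cohorts.
born_after_1995 = {"n": 400, "vaccinated": 232}       # 58% uptake
born_1995_or_earlier = {"n": 500, "vaccinated": 345}  # 69% uptake

def uptake_rate(group):
    """Descriptive statistic: proportion vaccinated in the group."""
    return group["vaccinated"] / group["n"]

def two_proportion_z(g1, g2):
    """z statistic for H0: the two cohorts have equal uptake rates."""
    p1, p2 = uptake_rate(g1), uptake_rate(g2)
    pooled = (g1["vaccinated"] + g2["vaccinated"]) / (g1["n"] + g2["n"])
    se = math.sqrt(pooled * (1 - pooled) * (1 / g1["n"] + 1 / g2["n"]))
    return (p1 - p2) / se

z = two_proportion_z(born_after_1995, born_1995_or_earlier)
# |z| > 1.96 corresponds to p < .05 in a two-sided test, so with these
# invented numbers the researcher would reject H0 of equal uptake.
print(round(uptake_rate(born_after_1995), 2))  # 0.58
print(round(z, 2))
```

Note how every element of the sketch is fixed in advance of data collection: the variables (birth cohort, vaccination status), the hypothesis, and the decision rule. That is exactly what makes this way of working deductive.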
Given this purpose, the research strategies employed will draw on quantitative research approaches. The researcher will identify in advance what aspects of that phenomenon (effect of media on vaccine uptake) they are interested in. They will work deductively when doing so. The goal of the research is to quantify the presence of, and links between, objective measures (such as age and vaccination uptake, ethnicity and vaccination uptake, or geographical location and vaccination uptake). Therefore, the researcher will develop a research design that enables them to produce the numerical data needed to conduct the statistical analysis procedures required to establish those links. This type of thinking is called hypothetico-deductive thinking.5 This way of thinking may be summarized in the following way:

1. You start from one or more concept(s), or variable(s) (e.g., age and vaccination uptake, ethnicity and vaccination uptake, or geographical location and vaccination uptake), and the aspects you understand as being key elements of that or those concept(s). These aspects are theoretically or empirically derived at the outset of your study.
2. You formulate a hypothesis, that is, a testable statement that spells out specifically what it is you assume is true about the variables in your study (e.g., teenagers are less likely to make decisions about whether to have a vaccine based on media coverage about that vaccine compared to people over 20 years of age).
3. You collect data using a fixed measurement instrument (e.g., a questionnaire) designed to produce exactly the data you need to assess the plausibility of your hypothesis.
4.
By paying careful attention to the design of the measurement instrument, the selection of participants, and the way you go about collecting your data, you can assess the degree to which your design and conduct of the study meet statistically derived criteria for arriving at credible research findings that can tell you whether or not your hypothesis is supported.

Qualitative Approaches Predominantly Draw on Inductive Reasoning

On the other hand, qualitative research designs and approaches are most often associated with inductively based reasoning where building theoretical and empirical concepts and understandings is the end point, rather than the starting point, for the research. As we discussed in Chapter 3, inquiry based on inductive reasoning does not set out with a predetermined set of specific variables to statistically assess in some way. Rather it sets out to obtain data that will enable the emergence of rich and thick description6 about, and in-depth understandings of, the problem or issue that the research is being designed to address. Theory can then be generated inductively from this “bottom-up” data. For example, within the broad area of media coverage of vaccination, a researcher might be interested in focusing on how people experience and make sense of the media coverage about a particular vaccine. Does this coverage affect their choice to either be, or not be, vaccinated, and why? Is there anything else that affects their choice, and if so, what and why? In this example, the researcher has not set out to deductively test specific aspects of the link between media coverage and vaccination uptake, such as age, that they have hypothesized in advance. Instead, they are interested in exploring participants’ viewpoints about
that media coverage, that vaccine, and whether this affected their decision about whether to have the vaccination. Why or why not? If so, how did it affect that decision? Therefore, they work inductively to find out what is going on in the situation. In this case, they will draw on strategies of inquiry derived from qualitative approaches.

Another Word of Caution

Another word of caution is needed at this point. It is too simplistic to define what quantitative and qualitative approaches are on the basis of the logic of inquiry on which they draw—for example, that qualitative approaches are entirely inductive in nature versus quantitative approaches, which are entirely deductive in nature. While it is the case that most qualitative research approaches are predominantly inductive, it is important to note that at times a researcher doing a qualitative study will follow up on issues or ideas that arise from the analysis of interviews or observations to find out more about that issue or idea. Therefore, the study will involve both inductive and deductive thinking throughout various points in that research design. The Putting It Into Practice box below has an example of this. We also point out that “nothing is wrong with taking a deductive approach” (Yin, 2016, p. 100) in a qualitative study from the outset if the issue being studied or the research questions require it. However, it is an unusual point of departure for a qualitatively driven study. There is, however, definitely something wrong with saying that all qualitative approaches must take a deductive approach and that the questions guiding that study must be in hypothesis form.
PUTTING IT INTO PRACTICE

AN EXAMPLE OF A RESEARCH DESIGN DESCRIBED AS “QUALITATIVE” THAT MOVES BETWEEN INDUCTIVE AND DEDUCTIVE THINKING THROUGHOUT VARIOUS PARTS OF THAT RESEARCH DESIGN

We might begin a qualitative interview study about older people’s experience of interacting with technology. We begin inductively, with no specific pre-research theoretically derived concepts to be focused on. The aim is to let the older person relate their experiences and the aspects of that experience that they consider important, rather than asking them to tell us about aspects of that experience that we have identified as being important or the focus of our study. During the analysis of our interviews, certain technologies might emerge as more problematic than others in terms of the older person’s experiences of interacting with them. We then might want to know more about why these technologies are more problematic than others for older people when interacting with them. Therefore, we might choose to conduct follow-up interviews specifically about this with those already interviewed, or we might identify and probe7 this issue when interviewing new participants. When we do this, we are acting deductively. However, we still allow participants to tell us what they find problematic, and thereby inductively explore what makes some technologies more problematic than others, and why they have been identified as such. We also might search the research literature to see if other studies related to older persons’ experiences of interacting with technology have identified this as a factor. If they have, then we can compare these studies’ findings to our own. In doing so, we are again acting deductively.8
Rounding Off Our Introductory Discussion of Qualitative and Quantitative Research Strategies

What makes a research approach or design, or aspects of that design, “qualitative”? What makes them “quantitative”? As we have seen, these questions have no short or easy answers. This is because what makes an approach to research either qualitative or quantitative lies in the methodological thinking underpinning each of these approaches, and the assumptions about research and data embedded in this thinking. Both qualitative and quantitative research approaches are strategies of inquiry (Denzin & Lincoln, 2018a). These qualitative and quantitative research strategies “connect researchers to specific approaches and methods for collecting and analyzing empirical materials” (Denzin & Lincoln, 2018a, p. 313). What specific qualitative or quantitative approaches and methods you will include in your research design will depend on what type of knowledge you will need to address the research question that underpins your research. This is why Guba and Lincoln (1994) point out that concepts and terms such as qualitative and quantitative research are “secondary to questions of paradigm, which we define as the basic belief system or worldview that guides the investigator, not only in choices of method but in ontologically and epistemologically fundamental ways” (p. 105). It is these basic belief systems or worldviews (paradigms of some sort), and the logic of inquiry associated with them,9 that result in different forms of data (e.g., numbers or words, and what type of numbers or words) being produced by the use of different methods that use different logics of inquiry to answer different types of research questions.
With this in mind, this chapter aims to get beyond simplistic nonnumerical (e.g., words) versus numerical definitions of qualitative and quantitative research approaches to explore the thinking that sits behind each of these approaches, and which gives rise to common features associated with qualitative and quantitative research approaches, respectively. In the next section, we discuss what these common features are.

COMMON FEATURES ASSOCIATED WITH QUANTITATIVE AND QUALITATIVE APPROACHES

It is possible to identify some common features of the ways of thinking that underpin quantitative, and qualitative, research strategies. It is important to remember that these features cannot be understood in isolation from each other. It is their interconnections that shape what we understand quantitative and qualitative approaches to research to be.

Common Features Associated With Quantitative Ways of Thinking When Designing Research

Common features of a quantitative way of thinking about research and ways to design and enact that research include the following.

1. Quantitative approaches to research are employed when the answer to our research question(s) (i.e., our methodological considerations) requires the production of some form of numerical data, which can then be analyzed using numerically based statistically derived strategies. As Greener (2011) puts it,
“Quantitative research is primarily concerned with techniques that analyze numbers. If we are calculating descriptive statistics (calculating averages, probabilities or exploring numerical relationships), then we are doing quantitative research” (pp. 2–3).
2. These approaches use a predominantly deductive way of thinking when deciding which questions to ask, what data to collect, and how to analyze that data. You will see this type of research referred to as hypothetico-deductive.
3. Consequently, the methodology underpinning quantitative research aims at deriving estimates and deducing properties about a population supported by statistical analyses of data from a subset of that population. In other words, statistics are used as a tool to transform numerical data into research findings that can be generalized to a large group of people.10
4. This requires the use of standardized instruments for data collection. Such a standardized instrument may, for example, be a questionnaire as part of a quantitative survey. This questionnaire takes the form of a predefined list of questions, often with predefined response options. Requesting people’s answers to the specific list of questions makes it possible to fit “the varying perspectives and experiences of people . . . into a limited number of predetermined response categories to which numbers are assigned” (Patton, 2015, p. 22). The interest of the researcher “is in the aggregates rather than particular individuals” (Presser, 1985, p. 95).
5. Quantitative research designs, and the methodologies and methods that arise from those methodologies, are associated with, and derived from, positivist or post-positivist thought.
6. Given an emphasis on objectivity in positivist and post-positivist thought, quantitatively driven research approaches are designed to eliminate any form of subjective bias on the part of the researcher.
This means that the ideal role of the researcher is that of the distant impassive observer.11
7. Therefore, quantitative approaches typically aim to arrive at objective, conclusive answers able to be applied to large populations of people by generalizing the results of statistically analyzed data from a relatively small subset of that population.
8. The number of people that we collect this variable-specific data from, as well as the strategy for selecting those people, is determined by the rules of statistics and probability. You will often see sampling in quantitative research described as being probabilistic or aiming for statistical representativeness. This point is developed further in Chapter 8 of this book.
9. In quantitative approaches, the people in our research, our sample, are chosen because they exhibit or “fit” the parameters of the variables of interest in our study. For example, they might be of a certain age, living in a certain area, and of a particular income group. They are of interest to the study only in terms of these specific parameters.
10. The credibility of quantitative research is judged on the basis of objective measures related to issues such as (a) the process used for selecting participants, (b) how many participate in the study, (c) how well the instrument you are using to measure the variables of the study actually does measure those variables, (d) how well the data produced by the measurement instrument fits with the statistical analyses performed on that data, as well as (e) the interpretation of the outcomes of those analyses. We return to unpack these points (a–e) further in Chapters 8 and 9.

Common Features Associated With Qualitative Ways of Thinking When Designing Research

While qualitative approaches to research vary, it is possible to identify some common features and/or principles of the type of thinking involved in qualitative research approaches:

1. Qualitative research, and the type of data that arises from it, is situated in the everyday reality or social world of the participants and sites that make up the context for a specific study. Mayan (2009) calls this trying “to make sense of life as it unfolds” (p. 11).
2. The researcher observes and explores aspects of the world of the participants in the study in order to understand the way that they perceive, experience, and make sense of that world. “[A]ll qualitative research is interested in how meaning is constructed, how people make sense of their lives and their worlds. The primary goal of a basic qualitative study is to uncover and interpret these meanings” (Merriam & Tisdell, 2016, p. 25).
3. Consequently, the way that the data is collected is termed “naturalistic” in that it is directed toward understanding what is going on in a specific context or setting under everyday conditions. There is no experimental, or researcher, manipulation of that setting.
Thus, “People will be performing in their everyday roles or will have expressed themselves through their own diaries, journals, writing and photography – entirely independent of any research inquiry” (Yin, 2016, p. 9).
4. Information about the context and setting of the research (the everyday of people’s lives) is as much a part of the data and findings of a study as are any data produced by, for example, an interview or observation conducted in that context.
5. The aim is for in-depth information or data from participants in the research on which to develop and base thick interpretations of “what is going on” from the point of view of people in a particular situation or context.
6. Hence, qualitative research is primarily inductive, as the information obtained from research participants is used to build understandings of the aspects of the social constructions that we are focusing on in our research.
7. Qualitative inquiry is premised on the ontological position that the world can be understood as made up of socially constructed meanings. It is these everyday social interactions and constructions that make up the way the world is. The world does not exist independently of those in it.
8. Consequently, qualitative inquiry is associated with non-positivist (such as constructivist) inquiry paradigms that allow for the idea of multiple realities which exist “in the form of multiple mental constructions, socially and experientially based, local and specific, dependent for their form and content on the persons who hold them” (Guba, 1990, p. 27).
9. Done well, qualitative inquiry generates large amounts of in-depth and rich data about the participants and sites that the research is designed to find out something about. Thus, you will see qualitative data and approaches being described as providing multiple sources of rich or thick information or description about a situation or experience.
10. The researcher is present and actively involved in the research. They are “the primary instrument for data collection and analysis” (Merriam & Tisdell, 2016, p. 16). Most often they will collect the data and interact with the participants and study sites. For as Denzin (2010) reminds us, “The qualitative researcher is not an objective, politically neutral observer who stands outside and above the study of the social world. . . . A gendered, historical self is brought to this process” (p. 23).
11. Since the aim is for in-depth information, the researcher deliberately and carefully selects information-rich participants or sites for study. This is why you will often see sampling in qualitative inquiry described as purposeful or purposive: the researcher purposefully seeks out people to interview who are information rich about what the study is trying to find out—the purpose of the study.
12. Since purposeful sampling is based on logic and criteria derived from qualitative thinking, and not on logic and criteria derived from the laws of probability, it is referred to as non-probabilistic sampling.
13.
The number of participants and/or sites in a qualitative study is usually smaller than in quantitative inquiry, where the sample size is driven by statistical considerations. However, the numerically smaller sample in a qualitative study will contain more in-depth data about each person or site in the study. Such in-depth and rich information will enable the emergence of rich and thick description.
14. In qualitative research approaches, data analysis employs an “art of interpretation” (Denzin & Lincoln, 2000, p. xii) and aims for “thick interpretations” (Denzin, 2001, p. 52).12 This means that analysis of qualitative data does not simply describe what people said or did, but also interprets it.
15. During the analytical process, our thinking about, and working with, the data continually moves forwards and backwards: examining, re-examining, thinking more about, collecting more data when necessary, and interpreting and re-interpreting our data.13
16. The idea of trustworthiness is used to make decisions about the veracity of the research. The trustworthiness and authenticity of the reconstructed accounts of aspects of the social world that inform the findings of the qualitative research is what makes that research credible.14
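The contrast between statistically driven (probabilistic) sampling and purposeful (non-probabilistic) sampling described in these two feature lists can be sketched in code. The sampling frame, field name, and group sizes below are entirely hypothetical, chosen only to make the difference in selection logic visible.

```python
import random

# Hypothetical sampling frame (invented): each record notes whether the
# person has first-hand experience of the phenomenon under study.
population = [
    {"id": i, "experienced_phenomenon": (i % 5 == 0)} for i in range(1000)
]

# Probabilistic sampling (quantitative logic): every member of the
# population has a known, equal chance of selection, which is what
# licenses statistical generalization back to the population.
rng = random.Random(42)  # fixed seed so the sketch is reproducible
probability_sample = rng.sample(population, k=100)

# Purposive sampling (qualitative logic): deliberately select
# information-rich cases; no claim of statistical representativeness.
purposive_sample = [p for p in population if p["experienced_phenomenon"]][:12]

print(len(probability_sample))  # 100
print(all(p["experienced_phenomenon"] for p in purposive_sample))  # True
```

The point of the sketch is not the code itself but the two different selection rules: one driven by the laws of probability, the other by the purpose of the study.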
TIP

DO NOT REDUCE YOUR THINKING ABOUT QUANTITATIVE AND QUALITATIVE RESEARCH APPROACHES TO A SERIES OF COMPARATIVE DOT POINTS

At this point you may be expecting that we will tidy up and summarize the discussion so far by producing a table that compares and contrasts quantitative and qualitative research across a range of selected features captured in comparative dot points (such as numbers vs. words). By default, such a table then becomes the “definition” of each approach—a form of what Koro-Ljungberg (2016) refers to as “easily digestible overviews” (p. 6) of complex matters. In such an easily digestible overview, quantitative research usually becomes defined as a research approach (or very often simply as a method) according to a set of dot points: it produces data in the form of numbers, takes a deductive approach to research, uses some form of survey or experiment to collect data, is objective in nature, and draws on a positivist/post-positivist inquiry paradigm. Qualitative research is then defined as a research approach (or very often simply as a method) according to a set of dot points set up in contrast to the corresponding dot points for quantitative research. For example, data is in the form of words rather than numbers, takes an inductive approach to research, uses some form of interviews, observations, or texts to collect data, is subjective in nature, and draws on non-positivist inquiry paradigms such as constructivism. An example of such a table is the one we have developed below (Table 5.1), called An Example of the Definition by Dot Point of Qualitative and Quantitative Research. Note that in this type of “definition by dot point,” qualitative and quantitative research approaches are effectively set up as “polar opposites” (Crotty, 1998, p. 15). Each approach is defined, whether intended or not, in terms of how it differs from the other related to a series of criteria.
TABLE 5.1 ■ An Example of the Definition by Dot Point of Qualitative and Quantitative Research

Criteria                    | Qualitative Research                                                   | Quantitative Research
Form data takes             | Words                                                                  | Numbers
Data collection             | Interviews, observations, textual analysis                             | Surveys, experiments
Type of logic               | Inductive                                                              | Deductive
Type of paradigm            | Non-positivist (such as constructivist)                                | Post-positivist
Typical questions addressed | Individual experiences, feelings                                       | How many, how much
Analytic focus              | Individuals—their subjective experiences, perceptions, understandings  | Large populations—objective measurements and predictions about how many or how much

Note: If you are thinking of only using this table to define qualitative and quantitative research, please read the text in this section to find out why this may not be such a good idea. The text reveals that such dot points only make sense in light of the thinking that gave rise to them in the first place.

However, simply basing your understandings of quantitative and qualitative research approaches on a series of comparative dot points such as the ones in Table 5.1 is inadequate and limited. The thinking that sits behind, and which produced, those
Chapter 5 • Qualitative and Quantitative Approaches to Designing Research   101

dot points is missing. Further, such dot points risk losing sight of the fact that there is not just one way of thinking about, describing, designing, or doing either quantitative or qualitative research, just as there is not one form of the numbers or words that make up quantitative or qualitative data. Consequently, what we can, and often do, end up with are sets of fairly normative, somewhat empty, simplified dot points that do not reflect either the complexity or the diversity that is embedded in, and makes up, the terms qualitative and quantitative. Therefore, while dot points might be a suitable device to sum up the key points of an extended discussion about the thinking that makes up qualitative and quantitative approaches, detached from that extended discussion they are not. It is important to have had the extended discussion before summarizing it in the form of dot points, because the dot points cannot be understood removed from the thinking that gave rise to them—the assumptions and understandings that sit behind each dot point.

VARIATION WITHIN QUANTITATIVE AND QUALITATIVE RESEARCH APPROACHES

Although it is possible to identify features common to quantitative research approaches and features common to qualitative research approaches, it is important to keep in mind that specific quantitative and specific qualitative approaches may vary widely. Not all approaches to quantitative and qualitative research are the same; they use different quantitative or different qualitative strategies of inquiry. Consequently, both quantitative and qualitative research approaches can be thought of as fields of study in their own right. In the same way that applied fields of study such as education, business, and nursing are made up of different schools of thought both theoretically and methodologically, so are the fields of quantitative research and qualitative inquiry.
Therefore, the terms quantitative and qualitative, when applied to research, data, analyses, and methods, are umbrella terms, neither of which can be reduced to a single set of understandings. Just because two studies use quantitative research approaches, it does not necessarily mean that they will use those approaches in the same way. For example, they may use different methods to collect their quantitative data, collect different types of quantitative data, or use different types of procedures to analyze that quantitative data. However, what the two quantitative studies will have in common is that they will be designed in keeping with agreed principles or features common to all quantitative studies. Similarly, just because two studies use qualitative research approaches, this does not mean they will necessarily be designed in the same way. For example, they may use different methods to obtain their qualitative data, obtain different types of qualitative data, or use different types of processes to analyze that qualitative data. However, what the two qualitative studies will have in common is that they will be designed in keeping with agreed principles or features common to all qualitative studies.

Quantitative Inquiry as a Diverse Approach

There is variation between research approaches described as “quantitative.” A broad distinction can be made between descriptive approaches, suitable for identifying and describing population characteristics; correlational approaches, suitable for establishing the existence of, and interpreting, relationships between such population characteristics; and experimental/quasi-experimental approaches, which aim to establish cause–effect relationships between those characteristics. What distinguishes these approaches from each other
is what they enable the researcher to say something about at the end of that quantitative study. Or, put another way, their potential, and therefore use, to resolve different types of quantitatively focused research questions. Descriptive and correlational approaches have the potential to address research questions about what is going on in a study population, while experimental/quasi-experimental approaches have the potential to address research questions about why something happens.15 For example, if your research question is about identifying phenomena in a population, or comparing the occurrence of phenomena across subgroups within that population, you will apply a descriptive quantitative approach. This is because a descriptive quantitative approach will enable you to map the state of things in that population, such as vaccination uptake across age, ethnicity, and geographical areas. However, if your research question is about identifying trends and patterns related to what you are interested in, you will apply a correlational approach, allowing for the detection and description of such patterns. For example, a correlational approach will allow you to investigate whether there is a pattern such as lower age corresponding to lower vaccination uptake, or shorter distance from the nearest vaccination center corresponding to higher vaccination uptake. Hence, a correlational approach allows you to say something about “what goes with what” (Oppenheim, 1992, p. 21) in a population. However, it is important to be aware that a correlational approach will not enable you to say anything about why x goes with y. To explain why x goes with y (for example, why lower age is connected to lower vaccination uptake), you need to apply an experimental/quasi-experimental approach. Such approaches enable you to make credible claims about cause and effect, or causality. Put another way, they address why x goes with y.
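The idea of “what goes with what” can be made concrete with a small, purely illustrative sketch. The data below are simulated (the positive relationship between age and vaccination uptake is built in by construction, not taken from any real study), and the helper name pearson_r is our own label for the standard correlation formula:

```python
import math
import random

random.seed(1)

# Hypothetical, simulated data: ages of 500 people and whether each was
# vaccinated (1) or not (0), constructed so that older age tends to go
# with higher uptake.
ages = [random.randint(18, 90) for _ in range(500)]
uptake = [1 if random.random() < (age / 120) else 0 for age in ages]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: a numeric summary of 'what goes with what'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(ages, uptake)
print(f"correlation between age and vaccination uptake: r = {r:.2f}")
```

Note what the sketch does and does not tell you: a positive r says that higher age and higher uptake occur together in these data, but, as the text stresses, it says nothing about why they occur together.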
In experimental or quasi-experimental research approaches, you develop a highly structured research design, making sure all variables under consideration are taken into account, while at the same time making sure that they are not affected by any other factors that might influence the outcome. An example of this type of highly structured quantitative research design is the Randomized Controlled Trial (RCT). An RCT is a type of experiment where you randomly assign people to an experimental group and a control group to study the effect of a drug, treatment, or other intervention on a variable of interest. The intervention being tested is applied to the experimental group, but not to the control group. Such random assignment underpins the assumption that the groups cannot be distinguished by any common factor other than the application of the intervention to the experimental group. The study is conducted to find out whether there is a statistically significant difference, related to the variable in question, between the group having had the experimental intervention and the control group. If there is, the study provides support for a claim that there may be a causal connection between the intervention being tested and the variable in question. This approach attempts to explain why something is as it is by employing methods and research designs that resemble those of laboratory experiments.

PUTTING IT INTO PRACTICE

PUTTING THE THINKING BEHIND THE RCT INTO PRACTICE

To determine how a new type of shoe insole affects the development of foot corns and calluses, 100 trial participants can be randomly separated into equal groups of 50—the experimental group and the control group. All participants’ feet are then examined
to map the pretrial prevalence of corns and calluses in both groups. During the course of the experiment, the experimental group is treated in one way (they start wearing the new type of insole), while the control group remains as it was prior to the experiment (they make no changes to their shoes). After the experiment, you compare the experimental group to the control group to see if there are differences between the two groups related to the characteristics in question (i.e., you compare the prevalence of corns and calluses across the two groups). If the groups are different after the experiment (i.e., the prevalence of corns and calluses has changed more in one group than the other), you know that the new type of insole is in fact related to the development of foot corns and calluses. Moreover, assume your research design includes making sure that during the experiment the participants act in exactly the same way as they did prior to participating: for example, they use the same socks and walk the same distances in the same type of shoes. Then, you may argue that the observed differences in prevalence of corns and calluses across the two groups after the experiment can be attributed to the effect of the treatment (i.e., wearing the new type of insoles), and not something else. In other words, you can conclude that in all probability, the treatment caused the characteristics in question to become different between the two groups.

Quantitative Approaches Vary in the Methods That They Use

Within quantitative approaches, there is a variety of specific methods that can be used to collect data, as well as a variety of statistical procedures that can be used when analyzing that data. Which methods and which statistical procedures you choose to incorporate in your research design will depend on your research questions or hypotheses.
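The thinking behind the insole trial in the Putting It Into Practice box above can be sketched as a small simulation. Everything here is a hypothetical illustration of the logic of random assignment and a significance test: the risk figures (0.5 for the control condition, 0.2 under the insole), the outcome generator, and the helper name two_proportion_z are our own assumptions, not results from any real trial.

```python
import math
import random

random.seed(7)

# Randomly assign 100 hypothetical participants to two groups of 50.
participants = list(range(100))
random.shuffle(participants)
experimental, control = participants[:50], participants[50:]

def outcome(p_risk):
    """Simulated post-trial outcome: 1 = developed corns/calluses."""
    return 1 if random.random() < p_risk else 0

# Assumed underlying risks: the insole lowers risk from 0.5 to 0.2.
exp_results = [outcome(0.2) for _ in experimental]
ctl_results = [outcome(0.5) for _ in control]

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(sum(exp_results), 50, sum(ctl_results), 50)
print(f"prevalence: experimental={sum(exp_results)/50:.0%}, "
      f"control={sum(ctl_results)/50:.0%}, z={z:.2f}")
# A |z| greater than about 1.96 corresponds to significance at the 5% level.
```

The key design point the sketch illustrates is the one made in the text: because assignment is random, a large difference between the two prevalences is attributable, in all probability, to the intervention rather than to some pre-existing difference between the groups.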
For example, quantitative approaches often involve the use of some sort of quantitative survey as a data collection tool. However, not all surveys are the same. Therefore, even when you have decided that you will use some sort of survey, you will still have to decide which type of survey you will use. If you want to make inferences about a specific characteristic to a broader population, such as the population of all individuals eligible for a vaccine during a specific time period, you will need a survey design reflecting that aim. Alternatively, if you want to know how often a particular income-related characteristic (e.g., high or low income) occurs together with opting out of an offered vaccine program, then your survey must be designed to enable such findings, that is, what goes with what, or the correlations between the variables of interest (income and nonparticipation in the vaccine program) in your study population. Moreover, a survey conducted to test whether empirical data supports a hypothesis is different from a survey conducted to describe the state of things in a population (with no a priori assumptions about the state of things). The difference between all these types of surveys lies in what the data obtained by them will enable you to say something about after that data has been analyzed. A survey conducted to test whether empirical data supports a hypothesis will be constructed around the variables that the hypothesis is about. On the other hand, a survey designed to describe the characteristics of a population will collect a wider set of data than just data related to the variables making up a hypothesis. Therefore, which type of survey you use depends on the aims of the study.

Capturing the Variety in Quantitative Approaches

We have tried to capture the idea of the diversity in quantitative approaches, at the level of what the different approaches enable you to say something about, in Figure 5.1 below.
FIGURE 5.1 ■ Examples of Diversity in Quantitative Approaches

We are not suggesting that these are all possible approaches to quantitative research, but
simply using the diagram to demonstrate the diversity of research approaches that fit under the umbrella term quantitative approaches. What Figure 5.1 illustrates is that there are cascading layers of decisions to be thought through, and then made, when thinking about designing research using quantitative approaches. All of these decisions connect to the purpose of your research—your research questions and the issue your research is being designed to address. You will need to think through these layers when developing your research design. The box below provides you with some tips for how you might begin to do this.

TIP

NAVIGATING THE DIVERSITY OF QUANTITATIVE APPROACHES WHEN DESIGNING YOUR RESEARCH

Given the variation that exists between, and within, quantitative research approaches, if you are employing some sort of quantitative approach in your research, then you will need to think through and be able to justify all the decisions you make related to choosing that approach. For example:

• Why use a quantitative approach as the organizing construct for your study?
• What do you base that decision on?
• How will your answers to these questions shape your research design?
• What type of quantitative research will be appropriate for addressing what you want to know about, and be able to say something about, at the end of the study?
• If you are using a survey, what type of survey will you use to collect your quantitative data and why?

Being able to answer these questions, and justify your answers, is an important part of ensuring the credibility of your quantitative research. Just stating that your research is a quantitative study, or a particular type of quantitative research such as a survey study, is not enough.
Qualitative Research Approaches as Diverse Strategies of Inquiry

There is a great deal of diversity among qualitative research approaches and the strategies of inquiry that they employ. This makes it difficult to come up with a simple definition of the field of qualitative inquiry, or an overview of the strategies of inquiry that qualitative approaches can use, given the breadth of the field. One way to capture, convey, and get a sense of this diversity is the idea of different territories (Tracy, 2020) within the field or landscape of qualitative research. There is a common territory that all qualitative approaches share. This is a territory demarcated by the common features of qualitative research approaches that we mapped out earlier in this chapter. Studies that draw on these common features are referred to as types of a “basic” form of qualitative research (Merriam & Tisdell, 2016, p. 23) or “a kind of generalized qualitative research” (Yin, 2016, p. 66). Such studies are not basic because they in some way simplify the principles of qualitative inquiry. Rather, they are basic or generalized because their research design draws on basic, or general, methodologically derived understandings and principles germane to all qualitative research approaches. Examples of such studies would be designing a research study using basic qualitative principles to inform the
use of interviews or observations “to answer concrete program and organizational questions” (Patton, 2002, p. 145). For example, the study by Cheek and Ballantyne (2001) used basic qualitative principles to inform the use of interviews to answer questions about how older people and their family members perceived, and experienced, the move of the older person from home to acute care, and then to an aged care facility:

This exploratory, descriptive study examined the search and selection process for an aged care facility following discharge of a family member from an acute setting. Few studies have examined this process and its effects on families. Individuals from 25 families where a family member had been recently admitted to an aged care facility following discharge from an acute setting were interviewed. This article reports participants’ perceptions of the search and selection process and its effect on the family. Five major themes emerged from the data: good fortune, wear and tear on the sponsor, dealing with the system, urgency, and adjusting. The results can be used to inform and assist families and health professionals working with families in this situation. (Cheek & Ballantyne, 2001, p. 221)

More Specialized Forms of Qualitative Research

However, there are also approaches within the field of qualitative research that draw on more specialized (Yin, 2016) strategies of inquiry. You may find it useful to think of each of these specialized approaches to qualitative inquiry as specific or specialized territories within the general territory or field of qualitative inquiry. Each specific territory, or specialized type of approach, has its own theoretical and methods-related traditions. Each is also “connected to a complex literature; each has a separate history, exemplary works, and preferred ways for putting the strategy into motion” (Denzin & Lincoln, 2018b, p. 21).
This means that different specialized approaches to qualitative research will be designed differently to reflect those theoretical and methods-related traditions. It also means that different qualitative approaches will be more suited to some types of research questions than to others.

Ethnography as an Example of a Specialist Qualitative Approach

For example, consider the specialist qualitative approach known as ethnography. This is an approach that has been central in many disciplines and fields of study—most notably anthropology and sociology, but more recently fields and disciplines as diverse as business, organizational studies, nursing, and education. It is one of the foundational methods of qualitative inquiry. The aim of an ethnographic study is for the researcher to immerse themselves in the setting being studied in order to understand the culture of that setting from the point of view, understandings, or perspectives of those being studied. You will see this referred to as taking an emic view. The idea of an emic view is captured well by Spradley (1979) when he wrote:

I want to understand the world from your point of view. I want to know what you know in the way that you know it. I want to understand the meaning of your experience, to walk in your shoes, to feel things as you feel them, to explain things as you explain them. Will you become my teacher and help me understand? (p. 34)

It is the focus on the theoretical idea of culture that makes a study ethnographic. Put another way, “to be an ethnographic study, the lens of culture must be used to understand the phenomenon” (Merriam & Tisdell, 2016, p. 31) that is being studied. Culture can be
understood as “the various ways different groups go about their lives and . . . the belief systems associated with that behavior” (Wolcott, 2008, p. 22). When designing an ethnographic research study, this lens must be added to the basic or general methodological considerations that underpin and make up any form of qualitative inquiry. Ethnographic research involves the researcher becoming immersed in the culture of interest (sometimes referred to as undertaking fieldwork) over a period of time. During this time the researcher uses data derived from observations, interviews, documents, and other texts such as pictures and images, to build a “thick description” (Geertz, 1973) of the culture that the research is designed to provide in-depth understandings of. For example, the culture of an institution/organization such as a prison (Cashin et al., 2010; Wacquant, 2002), a group within society such as a sporting club (Bolin & Granskog, 2003), or a particular ward or setting in a hospital (Bjerknes & Bjørk, 2012). Just as there is a great deal of variety among qualitative approaches to inquiry generally, there is also a great deal of variety in research approaches that call themselves ethnographies. For example, you may come across studies described as critical ethnography, feminist ethnography, institutional ethnography, focused ethnography, and auto-ethnography. Each type of ethnography varies in the theoretical understandings that underpin and shape the overall ethnographic study design.

Activity

Taking a Look at Ethnography in Action

Obtain a copy of the study by Bjerknes and Bjørk (2012), Entry into Nursing: An Ethnographic Study of Newly Qualified Nurses Taking on the Nursing Role in a Hospital Setting. Read it and then answer the following questions:

1. Why was this study suited to an ethnographic qualitative research approach?
2. What culture was being studied?
3.
What types of data were collected and why were they appropriate?
4. Did the study take an emic view—what do you base your answer on?

Remember, even though your field of study may not be nursing, you should be able to make decisions about this study in terms of its claims to be an ethnography.

Discourse Analysis—Another Form of Specialized Qualitative Inquiry

However, not all research questions lending themselves to some sort of qualitative strategy of inquiry will require an ethnographic approach. If, for example, you want to know something about how print-based and digital media have influenced and shaped societal attitudes to understandings of risk related to vaccines during the COVID-19 pandemic, you may choose to employ a different form of specialist qualitative inquiry known as discourse analysis. Discourse analysis is premised on the understanding of “language as a meaning constituting system which is both historically and socially situated” (Cheek & Rudge, 1994, p. 59). Therefore, the focus of analysis is the meaning constituting systems in texts generated from, for example, interviews, news articles, or visual texts such as pictures and films (Taylor, 2013). How did these texts come to be the way that they are in terms of the meanings that they convey, for example, about risk and vaccines, and what sustains the understandings that they produce about that risk? When designing qualitative discourse
analysis studies, this focus or lens must be added to the basic or general methodological considerations that underpin and make up any form of qualitative inquiry. Like other qualitative approaches, “discourse analysis is not a unified, unitary approach” (Cheek, 2004, p. 1144). It “does not refer to a single approach or method . . . [rather it] . . . refers to a range of approaches in several disciplines and theoretical traditions” (Taylor, 2013, p. 1). As a strategy of inquiry, discourse analysis is understood in different ways by different groups of researchers. This is influenced by the theoretical and methodological lens a researcher uses when thinking about what discourse analysis is and how it might be put into practice. Which approach or method is used depends on the theoretical assumptions a researcher makes about what discourse is and what a discourse analysis might involve. Thus, it is possible to design discourse analysis studies in very different ways.16 For example, discourse analysis drawing on traditions from the area of linguistics will vary from discourse analysis drawing on Foucauldian theory, both in focus (i.e., the types of questions about discourse that guide the study) and design (i.e., how that study will be carried out). Thus, “Discourse analysis will always be, because of its interdisciplinary origins, a multiperspective approach with different emphases and understandings in use depending on the position adopted by the researcher employing the approach. There cannot be ‘the’ set of rules for discourse analysis” (Cheek, 2004, p. 1148).

TIP

WHERE TO START NAVIGATING THE FIELD OF DISCOURSE ANALYSIS

A good starting point to see what a qualitative study using a discourse analysis approach involves, and how it can be designed, is Chapter 3 in Stephanie Taylor’s book What Is Discourse Analysis? (2013).
In this chapter, she discusses four examples of published research that used some form of discourse analysis as the organizing lens for the study. At the end of each example, she provides a summary of what we can learn from that example about discourse analysis and about designing research using this strategy of inquiry. Another starting point is the book Doing Discourse Research: An Introduction for Social Scientists by Reiner Keller (2013), which provides a thorough introduction to the basic principles of discourse research along with practical strategies for doing discourse analysis. Part of Chapter 2 in the book, pages 13 to 32, is devoted to a discussion of different approaches to discourse analysis.

How Many Specialist Types of Qualitative Inquiry Are There and What Are They?

There is no agreement about how many types of specialist qualitative research there are. For example, Yin (2016) identifies 12 frequently cited specialized types or variants of qualitative research that can be distinguished from one another in terms of the differences in the theoretical and methodological lenses that shape the strategies of inquiry that make up their design. These are action research, arts-based research, autoethnography, case study, critical theory, discourse analysis, ethnography, ethnomethodology, grounded theory, narrative inquiry and life history, oral history, and phenomenology (Yin, 2016, pp. 68–70). Each reflects a specific way of thinking about what will be studied and how.
What this highlights is that the theoretical position of a qualitative researcher undertaking a study using any specialized type or variant of qualitative research will affect the shape that the specialized strategy takes. This means that if you do choose to use one of the specialist types of qualitative inquiry, or call your study an ethnographic study, discourse analysis, grounded theory, or any other specialist name, you will need to read up and learn more about that specialist approach before you design your research. This means that “you should have carefully reviewed the specialized literature, incorporated key concepts, and emulated the specialist methods in doing your study” (Yin, 2016, p. 67).

TIP

APPLYING A QUALITATIVE LENS TO ONLINE SOCIAL CONTEXTS

It is also possible to view the internet, and the various online environments related to it, as a place in and of itself to be studied, a specific type of virtual social context to which a qualitative lens can be applied (Markham, 2018). For example, some qualitative study designs are forms of ethnographies of the spaces and contexts created in this virtual world (see Boellstorff et al., 2012). In this type of research, you will choose to observe online environments, sites, or groups that are information rich in terms of the information you seek to address your research questions. New and different questions arise about your role as observer and the extent to which you participate in those sites.

Capturing the Variety of Qualitative Approaches

Denzin (2008, 2010) uses the metaphor of the big tent to capture the idea of the diversity of qualitative approaches. The tent is not a fixed size that limits the possibilities for what can be known as qualitative research. Rather, the tent can be made bigger as qualitative inquiry grows and develops. There is room for many different approaches to qualitative inquiry in this tent.
In Figure 5.2, we have tried to capture the idea of such variation within qualitative approaches by building on and adapting Merriam and Tisdell’s (2016, p. 42) figure of Types of Qualitative Research. Our additions, drawing on Yin (2016), are in gray. When reading this diagram, keep in mind that we are not suggesting that these are the 14 approaches to qualitative research, or the only way we might think about types of qualitative approaches. We are simply using the diagram to demonstrate the diversity of research approaches that fit under the umbrella term qualitative approaches. It is important to remember that every one of the approaches in Figure 5.2 has its own theoretical understandings and focus. Further, each approach to qualitative research has accepted strategies of inquiry and ways of putting those strategies into practice. Therefore, how you actually design and put that research design into practice if you are using grounded theory will be different than if you are using discourse analysis or ethnography. It is beyond the scope of the focus of this book, which is research design, to discuss each of these approaches. We mention them to point out that you will come across these terms, and that if you are going to name your study as “a [. . .] specialist research design” (e.g., an ethnographic study, or a discourse analysis, or a grounded theory, or any other type of specialist qualitative research), then you have another whole series of design considerations to make. These considerations are related to the effect that the theoretical and methodological understandings of the particular specialist approach have on the way you will design your research.
As we emphasized previously, this means that you will need to read a lot about the specific type of qualitative research approach you have chosen to use in your research and make sure that your study is in keeping with the accepted principles of that approach. Like your initial choice of a qualitative approach, the type of qualitative approach that you choose to use will need to be justified by demonstrating why it is an appropriate choice in light of the purpose of your research and how using that specific approach will enable you to achieve credible research findings. This is particularly so given the variety that exists within each type of qualitative inquiry.

FIGURE 5.2 ■ Diversity in Qualitative Approaches

Activity

Mayan’s Example of the Coffee Shop

Mayan (2009) points out that “[e]thnographers, grounded theorists, phenomenologists, and discourse analysts, for example, can observe the same phenomenon but will naturally ask different questions of it” (p. 35). She captures this idea well using the everyday example of qualitative researchers in a coffee shop who focus on different things when they are in that coffee shop. For example, the ethnographer asks, “What is coffee shop culture?” (p. 35). To address this question, the ethnographer watches
how people behave in the coffee shop, for example, what they do when they order their coffee, what type of coffee they order, and where, and with whom, they drink that coffee. Are there certain rules or norms of behavior in the coffee shop that regulate behavior in that space? For example, how people serving coffee act, how people ordering coffee act, and what is acceptable behavior and what is not. On the other hand, the qualitative researcher who is a discourse analyst focuses on the language being used in the coffee shop context and its effects. For example, they may “likely look for the power embedded in more elite or obscure coffee language—‘I’ll have an el grande, decaf, mocciachino, no fat, no foam’—versus the more plain language (‘I’ll have a regular coffee’)” (Mayan, 2009, p. 35). Who does the elite or obscure language include and who does it exclude? How might this work to produce and sustain types of customers in that coffee shop? How might it create an impression of who belongs and who does not?

Now, choose another form of specialist qualitative approach from Figure 5.2, find out about it, and try to work out what a researcher using it might look for and ask about related to the social setting called “a coffee shop.” If you are finding this particularly difficult, Mayan provides two more examples you can refer to.

Summing Up: Design Considerations in Light of the Variety Within Qualitative Research

What you will need to think through, and make decisions about, when you are designing qualitative research is not whether there are 12 or 14 types, or variants, of qualitative inquiry, or more or less.
Rather, being aware that there is a great deal of difference possible between and within research designs and approaches that are described as qualitative, what you will need to consider, and make decisions about, is which qualitative approach might be most appropriate for addressing your research questions and why. When doing so, it will be important to keep in mind that this diversity of approaches does not mean that "anything goes" when designing your qualitative inquiry. Each approach to qualitative research has accepted strategies of inquiry and ways of putting those strategies into practice. There are some common understandings that all qualitative approaches draw on, and in order to be credible, your research design will need to be in keeping with them. There are also common understandings pertaining to specialist types of approaches to qualitative research. Your research design will need to reflect those understandings and show evidence of you putting them into action in the methods you use and the way that you analyze your data.

For now, what we want to highlight is that given the great deal of variety that characterizes the field of inquiry known as qualitative research, there are cascading layers of methodological decisions to be thought about, and then made, when designing research using qualitative approaches. All of these decisions connect to the purpose of your research—your research questions and the issue your research is being designed to address. You will need to think through these layers when developing your research design. The box below provides you with some tips for how you might begin to do this.
TIP
NAVIGATING THE DIVERSITY OF QUALITATIVE APPROACHES

If you are in the process of designing, or intend to design, your research using a qualitative approach of some kind, ask the following questions of that design:

• Why use a qualitative approach in the first place?
• What do you base that decision on?
• What type or variant of qualitative research will be appropriate for addressing what you want to know about, and be able to say something about, at the end of the study?
• What tradition or school of thought will you draw on within that type or variant of qualitative research and why?
• How will your answers to these questions shape your research design?

You will need to be able to justify your answers to these questions when writing about the way that you designed your research. This will contribute to the trustworthiness and credibility of your research. Just stating that you are doing, or that your research is, a qualitative study is not enough. What type of approach and strategy of inquiry are you using and why?

WHICH ARE BETTER: QUALITATIVE OR QUANTITATIVE RESEARCH APPROACHES?

With such variation and fluidity in research approaches and the type of data that they produce, it should not surprise you that the relative worth of different types of data, and the methods used to obtain them, can become a highly debated and contested area. For example, each year when we are teaching a research design class, questions inevitably arise from students in the class about which type of data or research approach, quantitative or qualitative, is the "best" or the "most scientific." These questions are a version of the long-standing debates about what science is and the onto-epistemological understandings on which any such view is premised. As we saw in Chapter 4 when discussing inquiry paradigms, this in turn leads to debates about the ways that science can (even must) be done.
This in turn raises the question of which methodologies and associated methods are scientific and which are not. Often this debate is reduced to a discussion of "the relative merits of quantitative/experimental methods versus qualitative/naturalistic methods" (Patton, 2015, p. 88). The Activity box below provides a good example of this. Working through this example provides insights into what this debate about which approach is better is all about, and how it plays out in practice.

Activity
Exploring a Recent Example of the Quarrel of "Which Are Better—Qualitative or Quantitative Approaches to Research?"

In 2015, the British Medical Journal (BMJ), a well-respected, high-impact-factor, peer-reviewed journal, stated that publishing qualitative studies was an "extremely low priority" for the BMJ. The rationale for this decision given by BMJ editors was that
our research shows that they are not as widely accessed, downloaded or cited as other research. We receive over 8000 submissions a year and accept less than 4%. We do therefore have to make hard decisions on just how interesting an article will be to our general clinical readers, how much it adds, and how much practical value it will be. (from "Excerpt from rejection letter tweeted by McGill Qualitative Health Research Group (@MQHRG), September 30, 2015" in Greenhalgh et al., 2016)

What this was in fact saying was that "no matter how good the qualitative inquiry might be, its methodology, in effect, automatically precluded it from being published in The BMJ" (Cheek, 2018a, pp. 51–52). In other words, it was a decision about what science is, can and cannot be, and what evidence is, can and cannot be, based on methodological and onto-epistemological assumptions. The effect of that decision was to constrain the way in which research could be thought about and designed methodologically. In turn, this effectively meant that certain types of research questions and problems could not be reported in the BMJ, since the way of addressing them was deemed not scientific. They should be published elsewhere.

Find out more about the BMJ example in Cheek (2018a), "The BMJ debate and what it tells us about who says what, when and where, about our qualitative inquiry." When doing so, think about how this debate could emerge in the first place, and why it is important to understand this.

The example in the Activity box reflects the contestation that arises from questions and views about whether a particular type of method, and the data produced by using it, can be considered "scientific" and able to provide credible research evidence. It is a version of the thinking that while "subjective understandings may be of very great importance in our lives . . . they constitute an essentially different kind of knowledge from scientifically established facts" (Crotty, 1998, p. 27). As Guba and Lincoln (1994) point out,

Historically there has been a heavy emphasis on quantification in science. Mathematics is often termed the "queen of sciences," and those sciences, such as physics and chemistry, that lend themselves especially well to quantification are known as "hard." Less quantifiable arenas . . . particularly the social sciences, are referred to as "soft." . . . Scientific maturity is commonly believed to emerge as the degree of quantification found within a given field increases. (pp. 105–106)

Therefore, it comes as no surprise that some researchers, and editors of journals, reject "qualitative" or nonnumerical data on the basis that it is not scientific or does not produce "hard" data. For example, MacInnes (2019) writes, "Numerical data is an essential way to present clear and concise evidence for any argument. Be skeptical of arguments made without numbers!" (p. 117). However, what this statement ignores is that just having numbers as data does not make something science, or better science than nonnumerical or qualitative data. Why numbers have been used as data, what numbers have been used as data, how those numbers have been determined and then analyzed in relation to the context of the
research design and what the research question is, is what gives a quantitative study validity and rigor, not the fact that the data is in the form of numbers. For as MacInnes (2019) himself points out, just using numbers does not make data stronger or better because, "like any other kind of information, numerical data can be excellent or poor quality, and can be used well or badly" (p. 11). Perhaps, then, it would be more accurate to state that "[n]umerical data [is] can be an essential way to present clear and concise evidence for [any argument] specific research problems or questions. Be skeptical of arguments made [without] with numbers until you have ascertained the validity of what those numbers are being used [for] to claim!" (MacInnes, 2019, p. 117—the words in square brackets are our deletions from the original, and the wording following each bracket is our addition). The point here is that what makes numbers "good" data is whether they are produced according to accepted principles for quantitative approaches to research and they provide the kind of information needed to answer the research problem or questions.17

The same point applies equally well to qualitative data—namely, that this data "can be excellent or poor quality, and can be used well or badly" (MacInnes, 2019, p. 11). Whether it is excellent or poor depends on whether this data is produced according to accepted principles for this type of research and data generation and provides the kind of information needed to answer the research problem or questions.18 The question then is not what the "best" form or type of data is—quantitative numbers or qualitative words. Rather, the question to ask yourself, and think through, is what type of data will enable you to make the interpretations you will need to make in order to answer the research questions or problem that you want to be able to say something about at the end of the study. Equally importantly, you will also need to ask yourself what you won't be able to say something about, and does this matter? Why or why not?
Therefore, the answer to the question of which is "better"—qualitative or quantitative research approaches—is "it depends," because this question can only be answered in relation to a specific research design and a specific research problem and questions. We agree with the renowned quantitative researcher Bram Oppenheim (1992) when he states that "choosing the best design or best method is a matter of appropriateness. No single approach is always or necessarily superior; it all depends on . . . the type of question to which we seek an answer" (p. 12).

CONCLUSIONS

Quantitative and qualitative approaches differ from one another because of the different methodological thinking that they draw on, which affects the way that these approaches are put into practice when designing and conducting research. This includes the way we think about the purpose of the research + the type of data we will need to achieve that purpose + the way we can obtain that data + how we make judgments about the credibility of the research + whether the type of information that the research produces can be considered scientific. Therefore, it is important not to detach quantitative and qualitative approaches to research from the methodological thinking and assumptions that shape those approaches. For example, it is not the fact that data is in the form of numbers in the case of quantitative research, or in the form of words in the case of qualitative research, that makes a study quantitative or qualitative in approach. What does is why the data takes that form, what numbers or words are collected from whom, how and why, and
how these numbers and words are analyzed and interpreted. Therefore, defining qualitative and quantitative approaches cannot be reduced to collections of simplistic dichotomies, often in the form of comparative dot points such as words versus numbers or inductive versus deductive.

In all of this, it is important to remember that there is a great deal of diversity in approaches to research that fit under the umbrella terms of qualitative research or quantitative research. There are various approaches within qualitative or within quantitative research that differ in terms of the type of questions they focus on, and therefore the methods that they use in their research design. Therefore, deciding that a qualitative or quantitative approach to research is an appropriate approach for the research that you are designing is just the first of a cascade of interrelated decisions that you will need to make about that design. These are decisions related to putting the approach that you have chosen into practice, and they will have to be thought through regardless of whether you are using qualitative or quantitative approaches.

Making these decisions will require you to focus your thinking at the level of the methods that you will use in your research design. Such thinking will require reflexive conceptual and intellectual work at the level of how quantitative or qualitative data will actually be collected and what you will need to think about when doing so. For example, what method(s) will you use to collect qualitative or quantitative data and why? Once you have decided on those methods, what shape will those methods take? What will you need to decide about them to be able to put them into practice? For example, who will you collect data from and why? How will you organize and analyze that data?
While answers to these questions will be different in qualitative and quantitative approaches, such incremental, iterative, and connected thinking needs to be done no matter which of these research approaches, or methods associated with them, you are using in your research design.

In Chapters 6 to 9, we build on the discussion that we have begun here about qualitative and quantitative approaches to research. In these chapters, the focus of our discussion is at the level of methods and the way they are put into practice in qualitative research approaches (Chapters 6 and 7) and quantitative research approaches (Chapters 8 and 9) when collecting and analyzing data.

SUMMARY OF KEY POINTS

• Qualitative and quantitative are umbrella terms, each comprising a variety of approaches to research.
• Quantitative and qualitative approaches differ from one another because of the different methodological thinking they draw on.
• This methodological thinking affects the way these approaches are put into practice when designing and conducting research.
• Defining qualitative and quantitative approaches cannot be reduced to collections of simplistic dichotomies such as words versus numbers, inductive versus deductive.
• What makes research qualitative or quantitative includes why the data takes a specific form, what numbers or words are collected from whom, how, and why, and how these numbers and words are analyzed and interpreted.
• There are different types of quantitative research. They vary in the type of research question they are able to answer.
• For example, some quantitative approaches enable us to describe what is going on in a group of people in a situation (descriptive quantitative approach), others enable us to make links between aspects of what is going on (correlational quantitative approach), and yet others enable us to say why those links exist (experimental/quasi-experimental approach).
• There are different types of qualitative research. Some qualitative studies use more specialized approaches where a particular methodological or theoretical approach provides a lens for the entire study, for example, ethnography or discourse analysis.
• Which type of qualitative or quantitative approach you choose to underpin your research design depends on the type of knowledge that you will need to address what it is that your research is designed to find out more about.
• Neither qualitative nor quantitative approaches are innately better than the other. Which approach is better depends on what type of data will enable you to make the interpretations you will need to make in order to answer the research questions that you want to be able to say something about at the end of your study.
• Deciding that a qualitative or quantitative approach to research is an appropriate approach for the research that you are designing is just the first of a cascade of interrelated decisions that you will need to make about that design.
KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

correlational approach
descriptive quantitative approach
discourse analysis
ethnography
experimental/quasi-experimental approach
hypothetico-deductive thinking
qualitative approach
quantitative approach

SUPPLEMENTAL ACTIVITIES

1. Find two published reports of research, one of which reports using qualitative approaches or describes itself as a qualitative study of some kind and one that uses quantitative approaches or describes itself as a quantitative study of some kind. Then ask these questions of each article. Do the authors make it clear
• Why they used a qualitative or quantitative approach?
• What types or variants of qualitative and quantitative approaches were used in the respective reports?
• What tradition or school of thought the authors have drawn on within that type or variant of the chosen approach, and why?
• Why those approaches were considered appropriate for addressing what the researchers wanted to be able to say something about at the end of the study?

The answers to these questions are part of establishing the credibility of that reported research.

2. If you are in the process of designing research and have chosen either a qualitative or a quantitative approach, you will need to think about how to put that approach into practice. Try to write answers to the following questions:
• Why have I chosen a qualitative or quantitative approach for this study?
• What type of qualitative or quantitative approach am I using, and why?
• What method(s) will I use to collect the qualitative or quantitative data, and why?
• Once I have decided that, what else will I need to decide in order to be able to put those methods into practice?

It may be that you are not able to write much about the third and fourth dot points at this stage. Therefore, keep what you have written and progressively add to, or modify, it as you read Chapters 6 through 9. When doing so, also write down why you made the additions or changes you did. At the end of this reflexive process, you will have developed a large part of the methodology and methods sections of your research proposal in terms of what you will do, how, and why.

FURTHER READINGS

Cheek, J., Onslow, M., & Cream, A. (2004). Beyond the divide: Comparing and contrasting aspects of qualitative and quantitative research approaches. Advances in Speech-Language Pathology, 6(3), 147–152.
Gorard, S. (2003). Quantitative methods in social science. Continuum.
Tracy, S. J. (2020). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact (2nd ed.). John Wiley & Sons. See especially Chapters 1, 2, and 3.

NOTES

1. This point is picked up and developed in Chapters 6 and 7.
2. This point is picked up and developed in Chapters 8 and 9.
3.
Uptake: The proportion of the eligible population who accepted a vaccine offered during a specific time period.
4. At the time of writing this chapter, this is happening in relation to media reporting about the COVID-19 vaccine known as AstraZeneca.
5. We already have discussed aspects of this thinking and the way it affects the development and form of research questions in Chapter 3—see the section "Deductive Reasoning." Chapters 8 and 9 have an extended discussion of this type of thinking.
6. Clifford Geertz (1973), who drew on Ryle's (1971) earlier work, is largely credited with introducing the term thick description into qualitative research. Geertz's original conception of thick description, which was anthropologically based, was more
descriptive in orientation than contemporary uses of the term, which have emphasized an interpretive dimension to this description.
7. See Chapter 6 on probes and interview guides.
8. We return to take a closer look at this analytical process in Chapter 7.
9. See Chapter 4 for a discussion of inquiry paradigms.
10. See Chapter 8 for a detailed discussion of this.
11. However, the idea of the researcher as a distant, impassive observer is a simplification of the reality of research. For example, the questions in a questionnaire are not given a priori; neither are they the only possible questions to ask. Rather, they are a result of qualitative choices made by the researcher when developing their theoretically or empirically derived understanding of the variables included in the study. This point will be developed in Chapter 9 of this book.
12. See Chapter 7.
13. We take a close look at this process in Chapter 7.
14. We return to the idea of trustworthiness, and how to make decisions about it, when discussing qualitative analytic strategies in Chapter 7.
15. In Chapter 8, we provide a more extensive discussion of different types of quantitative approaches and their potential to address different types of research questions.
16. Keller (2013) and Taylor (2013) provide excellent overviews of the wide variation in qualitative discourse analysis approaches arising from the different types of theoretical perspectives about discourse used in them.
17. This point is developed in Chapters 8 and 9, where we take a closer look at designing, and using, methods associated with what is referred to as quantitative research.
18. This point is developed in Chapters 6 and 7, where we take a closer look at designing, and putting into practice, methods associated with what is referred to as qualitative research.
6
OBTAINING DATA USING QUALITATIVE APPROACHES

PURPOSE AND GOALS OF THE CHAPTER

The purpose of this chapter is to explore the series of research design–related decisions you will need to make when collecting data using qualitative methods. It is the first of four interconnected chapters (Chapters 6, 7, 8, and 9) which explore how to put the methodological thinking associated with, and underpinning, qualitative and quantitative research approaches into practice. In this chapter, our focus is what you will need to think about when making choices about the type, and form, of methods that you will use to collect qualitative data. After reading it, you will know a lot more about what you will need to find out about, think through, and decide upon before you are able to put any qualitative method into practice when collecting data.

There are a number of ways that data can be collected or obtained using qualitative methods. These include interviewing people about what you want to find out more about; observing what is happening in a social context of interest related to your research problem; and analyzing written and visual texts such as documents, diaries, minutes of meetings, photographs, and other images. When designing your research, you will need to think about, and decide, which of these methods will form part of your design. You will need to be able to justify your decision for your research to be credible. However, even when you have decided which of these methods might enable you to obtain the data you need to address your research problem, your method-related decision-making is not yet finished. This is because there is a great deal of variety within each of these types of qualitative methods. Qualitative interviews do not all take a standard form. Neither do qualitative observations and textual analyses.
Therefore, a key decision you will need to make when designing your research is which form of these qualitative methods you will include in your research design, and why. Throughout the discussion we demonstrate this type of thinking and decision-making in action. When doing so we use the example of qualitative interviews as the primary vehicle for the discussion. We structure the discussion around four questions, the answers to which will affect the form that the qualitative interviews that you use in your study take, as well as how those interviews will be put into practice:

• How structured will your qualitative interviews be?
• Will you interview your participants individually, or in some form of group?
• What will you ask your participants in the interview?
• Who will you interview?

We highlight how these questions, and the decisions you make about each of them, shape how a qualitative interview is designed and put into practice. There are a range of possible answers to these questions. How you answer each question will be influenced by the theoretical and methodological assumptions you are making in your study. Therefore, it is important to mentally add an and why at the end of each of these questions. This is because thinking about why you have answered that question in a particular way exposes the assumptions, both methodological and theoretical, you are making when you decide to design your qualitative methods in one way and not another.

Throughout the discussion, we demonstrate that these types of considerations also shape the form that other qualitative methods take. For example, deciding how standardized or structured observations will be, and why, involves the same type of considerations as deciding how structured qualitative interviews will be. Likewise, deciding which texts you will analyze involves the same type of considerations as deciding who you will interview and/or what you will observe.

The goals of the chapter are to

• Establish that the choices that you make about the form that any type of qualitative method takes are part of your research design and therefore need to be made explicit.
• Emphasize that qualitative methods are about methodical and justifiable ways of both collecting, and analyzing, data. Therefore, qualitative methods are much more than data collection techniques. They cannot be viewed as stand-alone procedures able to be inserted into your research design.
• Explain that qualitative methods, like qualitative approaches, are diverse. They cannot be standardized. There is a range of possibilities for how to put any qualitative method into practice when collecting qualitative data.
• Highlight that making decisions about what type of qualitative method you will use to collect data requires you to have thought about, and decided on, the type of information you will need to address your research problem or questions.
• Illustrate that the form that a specific qualitative method takes results from a series of interconnected considerations about how to obtain the type of data you will need to address your research problem or questions.
• Use the example of qualitative research interviews to explore this series of interrelated choices needing to be made when putting qualitative methods into practice.
• Demonstrate that, and how, making decisions about those choices is influenced by the theoretical and methodological assumptions you are making in your study.
• Act as a guide for what you will need to think through, find out more about, and then decide on, before including some sort of qualitative method to collect data as part of your research design.
QUALITATIVE METHODS ARE NOT STAND-ALONE DATA COLLECTION TECHNIQUES

We saw in Chapter 5 that there is a great deal of diversity in approaches to research that fit under the umbrella term of qualitative research. Each is "connected to a complex literature; each has a separate history, exemplary works, and preferred ways for putting the strategy into motion" (Denzin & Lincoln, 2018b, p. 21). Given this diversity, it should come as no surprise that there are also a number of ways that data can be collected or obtained using qualitative methods. Decisions about which qualitative method you will use in your research design, and the form that that method will take, result from a series of interconnected considerations about how to obtain the type of data that, when analyzed, can provide the type of information you will need to address your research problem or questions. There is no point in collecting data if you do not have a plan for analyzing it. In qualitative research, data collection and analysis occur iteratively and simultaneously throughout the study.1 Therefore, qualitative methods are about methodical and justifiable ways of both collecting and analyzing data. This is an important part of establishing the trustworthiness and credibility of a qualitative research design and the conclusions that are based on that data collection and analysis.

The key point in all this is that qualitative methods are much more than data collection techniques. They cannot be viewed as stand-alone procedures able to be inserted into your research design. The overall research design strategy that you are employing provides the context for, and justification of, why particular methods are part of that design, as well as what form those methods take.
Therefore, any qualitative method, and the way that it is put into practice, cannot be understood apart from all the other choices that collectively make up your research design. Consequently, this chapter about research design considerations related to collecting data using qualitative methods must be read in context. This context is the thinking that you have done so far about your overall research design, which has led you to conclude that the type of data you will need to address your research problem can be obtained using some form of qualitative method. This means that you

a. know what the purpose of your research is, and have a clearly defined research area or problem or question (Chapter 3).
b. have thought about what type of information you will need to address the problem that your research is being designed to address (Chapter 4).
c. are aware that addressing your research question or problem requires the exploration of one or more aspect(s) of individuals' perceptions, experiences, and understandings of the matters related to that problem, as well as the social settings that give rise to those understandings (Chapters 4 and 5).
d. have decided that the type of information needed can be provided by analyzing and interpreting the type of data produced by a qualitative research approach (Chapter 5).
e. have considered the ethical implications of each of the above points and decided that the research can be done in a responsible and ethical manner (Chapter 2).
Having considered, and thought through, points (a) through (e) above, you are now ready to focus your thinking on what type of qualitative approaches, and the methods associated with them, might be able to provide you with the information you need to address the problem that your research is being designed to address.

TIP
USE THE TERM DATA COLLECTION WITH CARE

It is important to recognize that there are different points of view about the use of terms such as collecting data or data collection among qualitative researchers. Not all qualitative researchers would be comfortable using these terms. Instead, they might choose to use terms such as generating data, strategies for obtaining data, or even making data. This reflects their view that terms such as data collection can give the impression that data is a "thing" to be "got." You will recall from Chapter 4 that data is much more than that. It is about the way that parts of reality are reduced or "chunked" into manageable units (Bernard et al., 2017) able to be analyzed and interpreted to produce the findings or results of our study. Therefore, although throughout the chapter we have chosen to use the terms collecting data or data collection, this does not mean that we see data as a thing waiting there for us to find or collect.

DIFFERENT QUALITATIVE METHODS USE DIFFERENT STRATEGIES OF INQUIRY

There are a range of methods used in qualitative research approaches. Common methods often used to collect data in qualitative inquiry include observations, interviews, and analyses of some form of written or visual texts. Such texts can range from documents such as minutes of meetings, organizational policies, and government regulations through to personal diaries, poetry, photos, or films. There is also a lot of diversity within each of these types of methods. There are different types of interviews, observations, and textual analyses.
Each type of interview, observation, or textual analysis employs different strategies of inquiry. Such different strategies of inquiry arise from different disciplinary, inquiry, and theoretical traditions that shape the way that the same method (e.g., an interview or an observation or a textual analysis) is understood and enacted. There is not a standardized form that all interviews or observations or textual analyses take. However, despite their diversity, what all qualitative methods, and the strategies of inquiry they employ, have in common is that they will be designed and used in keeping with the basic principles and features of qualitative research outlined in the previous chapter—principles such as collecting data in a way that enables you to explore aspects of the world of the participants in the study in order to understand the way that they perceive, experience, and make sense of that world. There is no overt experimental or researcher manipulation of that setting. Thus, “People will be performing in their everyday roles, or will have expressed themselves through their own diaries, journals, writing and photography—entirely independent of any research inquiry” (Yin, 2016, p. 9). Therefore, when making any decisions about the methods you will use, and the form that they will take in your qualitative research design, you will also need to think about how the form that those methods will take is in keeping with the basic principles of
Chapter 6 • Obtaining Data Using Qualitative Approaches   123

qualitative research. Such thinking will require reflexive conceptual and intellectual work at the level of how qualitative data will actually be collected, and what you will need to think about when doing so.

TIP
SOMETHING TO KEEP IN MIND

It is possible to conduct interviews, make observations, and analyze texts as part of quantitative studies. However, these will be types of interviews, observations, and textual analyses designed to produce numerical data. Therefore, it is important that you think about what form your method for collecting data takes, what type of data it will produce, and whether this is the type of data that enables you to make the types of analyses and interpretations you will need to be able to address your research questions. It is not enough to say that your research design will use interviews or observations or textual analyses as a data collection method. You will need to outline and justify all the choices you make about what type of interviews, observations, or textual analyses you will use and why. This is an important part of establishing the credibility of your research.

Key Questions to Ask Yourself When Choosing Types of Qualitative Methods or Strategies of Inquiry

The diversity between and within qualitative methods means that you will need to think through which qualitative method(s), and which form(s) of that method, is the best fit with your research design. These are key questions you will need to ask yourself and think reflexively about:

• What method(s) will you use to collect qualitative data and why?
• Once you have decided on the methods you will use, what shape or form will those methods take and why?
• What will you need to decide about them to be able to put them into practice and why?
• For example, who will you collect data from and why?
• How will you organize and analyze that data and why?
You will notice that after each of these questions we have explicitly added “and why” as part of that question. This is because, as we indicated previously, thinking about why you have answered that question in a particular way exposes the assumptions, both methodological and theoretical, you are making when you decide to design your qualitative methods in one way and not another.

Navigating the Diversity Between and Within Qualitative Strategies of Inquiry When Designing Your Research

The rest of the chapter explores the thinking that you will need to do about each of these key questions when choosing the methods, and the form that those methods will take, for your research design. It takes the form of an extended discussion about what you will
need to think about when making, and justifying, the decisions about how you will go about putting a qualitative research method (such as observations, interviews, or textual analyses) into practice in order to obtain the type of data that will enable you to answer your research problem or questions. We use the example of collecting data using some form of qualitative interview as the vehicle to drive, and around which to organize, the discussion. We explore what you will need to think through, and make choices about, both before and during collecting qualitative data using some type of interview. Specifically, choices and decisions about

• how structured your qualitative interviews will be;
• whether you will interview your participants individually, or in some form of group;
• what you will ask your participants in the interview;
• who you will interview.

While these are not the only choices to be made when using some type of interview to collect qualitative data, exploring these choices provides you with good examples of the reflexive thinking and questioning that you will need to do about all the choices you make about the qualitative research interviews you will use in your study design. You will identify further questions to ask yourself as you systematically, and iteratively, think through what you will need to know, and decide on, related to the form your interviews will take. The same type of reflexive thinking applies to the decisions that you will make about the form that any other qualitative method will take. For example, if you have decided to use qualitative observations of some kind in your research design, here are some of the choices you will have to make about those observations:

• how structured your qualitative observations will be;
• whether you will observe individuals or groups or both;
• what you will observe and where;
• which individuals or groups or settings you will observe, where, and for how long.
Similarly, if you decide to analyze texts in some way, you will need to think reflexively through the same sorts of choices:

• What form will your textual analysis take, for example, a more structured content analysis or an unstructured discursive analysis?
• Will you analyze one type of text or several different types of texts?
• Which specific texts of that type(s) will you choose?

Therefore, the discussion to follow will be of use to anyone collecting data using some form of qualitative method, not just those using interviews. Having established the purpose and context for the discussion to follow, we will begin our exploration of this reflexive thinking in action by exploring the choices you will need to make related to the question of how structured your qualitative interviews will be and why.
TIP
ANOTHER KEY ISSUE TO THINK ABOUT: HOW MIGHT CONDUCTING EMOTIONALLY LADEN RESEARCH IMPACT YOU AS A RESEARCHER?

Collecting qualitative data can have a powerful impact on the researcher. This is particularly so when people are talking about difficult and emotive issues such as losing a child, coping with cancer, or being abused. In her article called “Compassion Stress and the Qualitative Researcher,” Kathleen B. Rager (2005) explores her experience in relation to the effect that researching the self-directed learning of breast cancer patients had on her as a researcher:

The participants in my qualitative study of the self-directed learning of breast cancer patients were protected carefully. . . . The question that was not asked as I began my study but that I wish to raise at this time is whether researchers should not also attend to themselves in the process. No thought was given to ensuring that I would take steps to address the impact on me as I conducted such emotionally laden research. (p. 423)

It is important to be aware of, and not underestimate, the impact that entering and interacting with people in their context can have on you when you are researching those contexts. Therefore, if you are a student who is about to collect your first qualitative data, it will be helpful to talk this through with your supervisor and work out a strategy for dealing with any distress that arises.

HOW STRUCTURED WILL YOUR QUALITATIVE INTERVIEWS BE AND WHY?

Among other things, qualitative interviews vary in terms of how structured the interview is. Therefore, when using qualitative interviews as part of your research design, you will need to make decisions about how structured your interviews will be. Will your research design use fully structured, standardized, or closed interviews? Or will you use semistructured or open/unstructured/informal interviews? What will you base this choice on?
In what are referred to as structured or standardized or closed interviews, an interview schedule is used and followed “as if it were a theatrical script to be followed in a standardized and straightforward manner” (Fontana & Frey, 2005, p. 702). Therefore, each research participant being interviewed is asked the same predetermined interview questions, using the same wording, in the same order. Usually there is a limited set of response categories where “the interviewer records the responses according to a coding scheme that has already been established by the project director or research supervisor” (Fontana & Prokos, 2007, p. 19). This is because the aim in using a closed interview structure is to obtain information from each participant in your research about the same specific aspects of the research problem you are designing your research to address. Your motivation will be to obtain some form of standardized data that can then be compared across large numbers of participants. Closed interviews are common in quantitative survey designs. This is because they are, in many ways, an “oral form of a written survey” (Merriam & Tisdell, 2016, p. 110), and enable you to statistically generalize the answers given to the wider population of interest.
Not surprisingly, closed interviews are uncommon in qualitative research approaches. The type of data this kind of interview produces is not congruent with the rich and in-depth information that most qualitative research is designed to obtain. If used at all in qualitative research designs, closed interviews usually collect some form of demographic data about the participant(s) in the study.

Unstructured or open qualitative interviews have a much more flexible and nonstandardized structure. You will see this type of interview referred to as an “unstructured,” “informal,” or “open” interview. In this type of interview, the interviewer “attempts to understand the complex behavior of members of society without imposing any a priori categorization that may limit the field of inquiry” (Fontana & Prokos, 2007, p. 40). The “questions are broad and open-ended in ways that let the participants discuss their own thoughts on the topic” (Morgan, 2016, p. 61). The interview is more a conversation, with the researcher picking up on, and probing in more depth, areas that the participant has chosen to talk about when answering those questions. For example, Morgan’s study (1989) of how spouses adjusted to becoming widows had only one question: “What sorts of things made it easier for you to deal with your widowhood and what sorts of things made it harder for you?” (p. 102). He notes, when reflecting on the study 30 years later, that

[t]his design made it possible to hear about participants’ experiences in their own terms, and even though there was a clear underlying research question, the point was to learn about how this topic fit into the lives of these women without directing them to discuss the things that interested the research team. (Morgan, 2019, p. 8)
Choices About Structure Are Choices About the Degree of Control You Have Over the Interview

The higher the level of structure of an interview, the more control a researcher exercises over that interview, such as over what is asked and in what order. If you conduct a completely structured or closed interview (or standardized interview), you, as the researcher, are controlling that interview by directing the person you are interviewing to discuss things that interest you (Morgan, 2019). You are controlling what will be asked, and equally importantly, what won’t be asked or followed up in the interview. This is because you are going to ask each person being interviewed the questions that you have decided on before the interview has begun. No matter what the person being interviewed talks about or the issues they raise, the only questions that you will ask are the ones that you came up with originally. Therefore, the only information you will obtain is about specific aspects of the specific areas that you have deemed important before you began the interview.

A less structured interview enables you to obtain in-depth and rich information about the experiences and understandings of the situation from the point of view of the person being interviewed. This type of interview uses an interview guide rather than an interview schedule. An interview guide is “a flexible list of questions to be asked during the interview, which are meant to stimulate the discussion rather than dictate it” (Tracy, 2020, p. 178). In other words, an interview guide is just that, a guide. It gives you as the researcher flexibility to respond and engage with what the person has said. This makes it possible for you to find out about things which you may not have thought about before that interview and would now like to know more about.
In this way, the more open or unstructured interview can enable an active exchange of ideas that produces a depth and richness of data in a way that a closed interview structure cannot. It thereby can enable much richer in-depth information to be obtained. In this type of interview, both the researcher and the person being interviewed shape the form that the interview takes. The researcher does not control the entire interview.

In reality, most interviews fall somewhere on a continuum between fully closed or structured and fully unstructured or open interviews (Morgan, 2016; Merriam & Tisdell, 2016). Therefore, they are described as semistructured. A semistructured interview gives some structure to what is to be talked about but does not dictate in what order, form, or how it must be talked about as a more closed interview structure does. The semistructure does not preclude the possibility of exploring and asking questions about what was said. This enables you to follow up and probe unanticipated and interesting directions and areas that may arise during the interview. Therefore, the semistructured interview guide “is meant to stimulate discussion rather than dictate it” (Tracy, 2020, p. 158). There is flexibility in the way that the interview guide can be used and the interview conducted.

Both semistructured and open interviews enable rich, in-depth information to emerge. Participants’ “own words can be captured by the method and hence the researcher can focus on issues that are important to the participants. . . . It provides opportunities for the researchers to probe and explore in great depth, and to follow up clarification immediately” (Liamputtong, 2013, p. 71). Therefore, you will find both semistructured and open interview formats used extensively in qualitative research.
When you are designing your research, you will need to think about how structured your qualitative interview will be and justify the decisions that you make in this regard. This will require you to ask yourself questions such as: Will your research design use fully structured, standardized, or closed interviews? Or will you use semistructured, or open, unstructured, or informal interviews? What will you base this choice on? And is that choice methodologically consistent with the qualitative approach framing your study, and in turn the inquiry paradigm and theoretical assumptions framing that qualitative approach? Reflexively thinking through questions such as these will require you to consider the purpose of the interview, that is, what you want to be able to use that interview data for. You will need to make sure that there is congruency between this purpose and the amount of structure that your interview has.

Using the Same Reflexive Thinking When Collecting Data Using Other Qualitative Methods

The same type of thinking regarding how open or closed the parameters for collecting your qualitative data will be applies when using other qualitative methods. For example, observations vary in terms of the degree to which they are standardized, reflecting that the “nature of observations vary along a wide continuum of possibilities” (Patton, 2002, p. 267). This variation arises from the different theoretical influences, inquiry paradigms, and disciplinary traditions that shape the form that those observations take. For example, there is much variation between types of observations in terms of how those observations are made and what those observations are for. Standardized observations are where you, as the researcher, have decided in advance what will be observed, when, and how, just as in a closed interview structure, the researcher has decided in
advance what will be asked, how, and in what order. In other words, you control what will be observed, and how it will be observed. The form in which you record data about those observations will also be very structured—often in the form of an observation schedule.

Standardized observations are often used to gain data about the frequency of particular designated activities, events, or actions that a researcher has decided are of interest. For example, your observations might be about how many times, when, and by whom medications were crushed when being administered to older people in residential aged care. Or they might be about counting how many times drivers of cars, and which categories of drivers, leave the car park of a large shopping mall without wearing a seat belt. Or counting the number of cars passing through a specific roundabout during a 24-hour period in order to map the times of day there is likely to be a traffic jam at that roundabout. In all these examples your observations will take the form of some sort of observational log of the frequency of the event of interest occurring. The data will be primarily numerical and used to describe the frequency of an aspect of interest or predict something related to that frequency. Therefore, standardized observations are most often associated with hypothesis testing and quantitative research approaches.

In this type of observation, the researcher is a spectator (Patton, 2002) with a high degree of separation from the setting being observed. Observing what people do related to the area of interest from behind a two-way mirror would be an example of this type of observation (observer as spectator with complete separation from the setting). So would sitting at the back of a classroom and making observations without interacting with those being observed (observer as spectator, but making those observations from within the setting).
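To make concrete how purely numerical the output of a standardized observation schedule is, the seat-belt example can be sketched as a small tally. This is only an illustration of the idea, not part of the authors’ method; the log entries and category names are invented for the example.

```python
from collections import Counter

# Hypothetical standardized observation log: each record notes the driver
# category and whether a seat belt was worn when leaving the car park.
observations = [
    {"driver": "adult", "seatbelt": True},
    {"driver": "adult", "seatbelt": False},
    {"driver": "teen", "seatbelt": False},
    {"driver": "adult", "seatbelt": True},
    {"driver": "teen", "seatbelt": False},
]

# Frequency of non-compliance overall and by driver category: the kind of
# purely numerical summary a standardized observation schedule produces.
no_belt = [o for o in observations if not o["seatbelt"]]
by_category = Counter(o["driver"] for o in no_belt)

print(len(no_belt))    # total observed instances without a seat belt
print(by_category)     # instances broken down by driver category
```

Note that nothing about the meaning of the event to the drivers is recorded; the schedule fixes in advance what counts, which is exactly the contrast with the nonstandardized observation discussed next.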
However, in keeping with the underlying theoretical influences, inquiry paradigms, and disciplinary traditions on which most qualitative inquiry draws, qualitative research studies employ a type of observation associated with some kind of field work in a naturalistic setting. This way of observing is often referred to as entering the field. The researcher enters into, and observes, the everyday social settings and contexts in which people or activities of interest are located. This social setting is both where the researcher makes their observations, as well as part of what those observations are about. The researcher may be a full participant in the setting in which they are making their observations, or some mix of part participant and part observer (see page 277 in Patton, 2002). In this type of observation, the researcher is an important and central part of data collection. Observations occur in situ. They are embedded in, and arise from, the context in which they are made. Understandings and knowledge related to the problem the researcher is addressing are generated “by interacting, watching, listening, asking questions, collecting documents, making audio or video recordings, and reflecting after the fact” (Tracy, 2020, p. 76, drawing on Lofland & Lofland, 1995). The goal of such observation is to understand people and their actions in context. To gain such in-depth understanding requires a much longer period of immersion in that context than if the goal of the observations is to obtain information about the frequency or intensity of a specific action. It requires building rapport with those in the context being observed, as well as getting a sense of how interactions and actions related to what a researcher is interested in knowing more about “work” in that context. 
Consequently, in qualitative research studies using some form of observation to collect qualitative data (not just numerical data), observations are less standardized, and therefore less structured. The focus of the observations is broader than a single element (Patton, 2002) in a social setting—such as whether a seat belt is worn or not. As
the researcher, you do not set out to only observe specific predetermined events or count instances of them. Rather, you enter the field and collect observational data using “a logic and process of inquiry that is open-ended, flexible, opportunistic, and requires constant redefinition of what is problematic, based on facts gathered in concrete settings of human existence” (Jorgensen, 1989, p. 14). Here your goal is to understand what is going on in that context related to your research problem. Therefore, you exercise less control over exactly what you will observe.

PUTTING IT INTO PRACTICE
AN EXAMPLE OF PUTTING NONSTANDARDIZED OBSERVATION INTO PRACTICE

You may be interested in understanding family mealtimes as a social event—how families share or do not share mealtimes and what role mealtimes play in how family relations develop. Therefore, you choose not to standardize, and therefore limit, in advance what aspects of family mealtimes you will make observations about (e.g., duration, timing). Instead, you remain open in terms of what you will observe related to family mealtimes. You are interested in getting a more holistic picture of what is going on (Patton, 2002) in the social setting called a family mealtime. What did the people you observe understand a family and mealtime to be? What happened at the mealtimes? What role do family mealtimes as a social event play? What roles do different family members play at mealtimes? Your data will take the form of detailed field notes about what happened at the mealtimes you observed. Once you have made some initial observations that have indicated possible areas of interest, you may then make further observations about those specific areas of interest—in some ways a bit like the use of probes in interviews.
As was the case when making decisions about the amount of structure interviews will have in a qualitative research design, decisions about how standardized your observations will be depend on the purpose of your research. The key consideration is whether the way you are designing your observations is methodologically consistent with the qualitative approach framing your study, and in turn the inquiry paradigm and theoretical assumptions framing that qualitative approach.

WILL YOU INTERVIEW YOUR PARTICIPANTS INDIVIDUALLY OR IN SOME FORM OF GROUP AND WHY?

In this section, we take a closer look at the choices you will need to navigate and decide on related to whether you will use an individual or some form of group interview when collecting your data. Each option has advantages and disadvantages. You will have to make decisions about which of these ways of interviewing your participants you will use in your research design. This means that you will need to read and think about their relative strengths and weaknesses in the context of your overall research design. What can each of these ways of interviewing participants contribute to your study that other ways cannot? At the same time, what does that mean that you will miss out on, and does that matter?
Using a form of individual interview is one of the most common ways, if not the most common way, that data is collected in qualitative research approaches. As its name suggests, this form of interview involves a one-to-one dialogue between an individual participant and the researcher. This is why you will see an individual interview sometimes referred to as an in-depth individual interview. On the other hand, group interviews, as their name suggests, involve interviewing more than one person at the same time.

Focus Groups—A Specific Type of Interview

A specific type of interview involving a group of people is a focus group. “Focus groups are a research method that collects qualitative data through group discussions. This definition contains two components: first, the goal of generating data, and second, the reliance on interaction” (Morgan, 2019, p. 4). It is this second aspect—the reliance on the group interaction—that sets focus groups apart from individual interviews. This is because while a focus group is an interview, the “twist is that, unlike a series of one-on-one interviews, in a focus group participants get to hear each other’s responses and to make additional comments beyond their own original responses as they hear what other people have to say” (Patton, 2002, p. 386).

There is a range of views about how many people make up a focus group. This ranges anywhere from 3 to 12 people, although most views seem to land somewhere between 6 and 10, or 6 to 8, as “ideal” (see Liamputtong, 2011; Morgan, 2019). Carey and Asbury (2012) point out that having a smaller number of participants in a focus group “usually leads to greater depth of data, and small size is especially important for sensitive, complex topics. . . . With a small group, the facilitator can more easily manage the group dynamics, process the information, and attend to each member” (p. 45).
The emphasis on, and importance of, facilitating group interaction when conducting focus groups is why the person who conducts the focus group interview (usually the researcher) is referred to as a moderator. The role of the interviewer is to moderate the group in such a way as to enable active, relevant, and respectful discussion of the topics of interest—the focus of the group. You will also note when reading about focus groups that it is often recommended that a notetaker be present as well. The role of the notetaker is to record things that happen during the group, such as nonverbal reactions to comments, as well as recording their thoughts and impressions of the information emerging from the discussion in the group. It is harder for the moderator to do this when they are concentrating on keeping the interactions in the group focused while ensuring that all members of the focus group have the opportunity to participate. However, in reality, the notetaker and moderator are often the same person, largely because of resource constraints, particularly in the case of students doing their research.

Which to Choose?

In many ways, choosing between whether to use individual in-depth interviews or focus groups in your study design is really a matter of choosing between a series of trade-offs. For example, one trade-off in using focus groups as a form of collecting data as compared to using individual interviews is the loss of depth of the information that can be gained from each individual participant. On the other hand, a trade-off that you will need to make when using individual interviews instead of focus groups is the loss of data that could be
gained from the group interaction of the participants that you are interviewing individually. Therefore, you will need to think about what is more important for you in terms of meeting the goals of your study. Is it the information and insights that can be gained from the discussion and interactions of the participants in the focus group, or is it the in-depth information that you can gain from each individual? What do you base your answer on?

There are also factors related to feasibility that you may have to consider, and make trade-offs about, as you must do when designing any form of research. For example, a feasibility-related design consideration is the time and resources that you have available for doing the research. Students doing a time-delimited study with little, if any, monetary support may not be able to conduct individual interviews that each require travel (a financial and time cost) and then transcription (a time cost). However, conducting and transcribing fewer focus groups may be feasible in terms of cost and time. On the other hand, if for some reason it is difficult to get potential participants together for a focus group in the same place at the same time (for example, if health-related issues preclude them from traveling), then it may not be possible or desirable to use focus groups in your study design. This is because the richness of the data obtained from a focus group relies on the interaction between the participants. Therefore, if “not enough people attend, this is a more severe problem than if one person misses her or his individual interview” (Morgan, 2019, p. 21). Hence, Morgan advises: “if individual interviews and focus groups are likely to be equally productive for a given purpose, then sheer practicality may well favor individual interviews” (p. 21). Sometimes, depending on the time and resources available, it may not be necessary to make these trade-offs.
It may be possible to incorporate both individual interviews and focus groups into your study design. When designing research, “[f]ocus groups and individual interviews are often complementary rather than competing methods” (Morgan, 2019, p. 21). The individual interviews in the study identify perspectives about what it is you are studying at the level of the individual. You might then use focus groups to probe further into those perspectives and, for example, their implications for a practice area of interest. However, when you are designing your research, it is important to remember that just having more data, or more interviews, or interviews with both individuals and groups in your research design, does not necessarily make that design stronger or even sound. It might, but this depends on whether having this extra data and these additional interviews in some way enables you to better answer or understand the problem that your research is designed to address.

Therefore, the answer to the question of whether some form of individual interview is “better” for your research design than some form of group interview such as a focus group is that it depends. What it depends on is what you are hoping to be able to say something about after having completed your data collection and analysis. Put another way, what do you want your data to be able to be used for? Focus groups and individual interviews are only “better” than each other (or any other possible method of qualitative data collection) if they can “better” produce the type of data or information needed to “better” address the issues that a specific research study is being designed to illuminate. Such thinking about what better means is a conversation that you will need to have with yourself when thinking about, and using, any method that forms part of your research design.
WHAT WILL YOU ASK YOUR PARTICIPANTS IN THE INTERVIEW AND WHY?

In the courses we teach about research design and qualitative inquiry, students who have decided that some sort of qualitative interview will be their method of choice often present us with what they have decided will be their interview guide. They seem quite surprised, at times even defensive, when we ask them why they have chosen those particular interview questions and not others. This is because in their minds these are the obvious and important questions to ask. The reason we ask our students how they decided on this interview guide, and the questions that make it up, is that we want them to fold back on the thinking that has led them to come up with these questions. When doing so, we want them to consider whether these are the optimal (or only) interview questions to ask in order to be able to obtain the in-depth information needed to address their research problem, and thereby achieve the purpose of their research. Put another way, we want to encourage students to get into a reflexive dialogue with themselves about why they have chosen these particular interview questions and not others. In so doing, we want this reflexive dialogue to highlight the thinking that students still might have to do before finalizing their interview guide. We want to remind them that “[t]he key to getting good data from interviewing is to ask good questions; asking good questions takes practice” (Merriam & Tisdell, 2016, p. 117). Therefore, before, during, and after that practice, they will need to do a lot of thinking both about each question and about the way that each question connects to the other questions in their interview guide. Thinking about the connections between the questions, not just each individual question, when designing your interview guide is important.
Collectively, the questions that make up that guide will provide the multifaceted and rich data that will enable your research questions to be addressed. While each question must be able to be justified in terms of what it will add to meeting the purposes of the study, it must also be able to be justified in terms of how it complements the other interview questions in your interview guide. Therefore, you will need to consider how the information obtained from a particular question adds to, or extends, the information obtained from the other questions in that guide. Not thinking this through well enough when you are designing your interview guide can lead to problems when you analyze your interview data. This is because it may not be clear how the data obtained from the various questions are related, and therefore how that set of not clearly related data enables you to address your overall research problem.

Developing Lines of Inquiry

Given these considerations, how might you go about designing a qualitative interview guide made up of a coherent and connected set of questions? One way to begin is to identify possible lines of inquiry (Patton, 2002) to guide the interview. Initial lines, or areas, of inquiry emerge from your thinking about what it is important for you to know more about to get an in-depth picture of what is going on in the situation that your research is focused on. Once you have identified your lines of inquiry, you can then develop questions related to each of them. How do you come up with initial possible lines, or areas, of inquiry when designing your interview guide? A good place to begin is to think about your research problem and
questions—the purpose for doing your study. Within this broad area of interest, ask yourself what it is that you want to know more about, and why. Are there specific foci or areas of interest within the broad problem area that you want to know about? What is it about these foci and areas of interest that has triggered your interest? In addition, the findings of other researchers working in areas related to your proposed study might provide starting points for developing your lines of inquiry. You could also read critiques of these studies and use that critique as a starting point for thinking about what your lines of inquiry might be. Or, you might draw on your own or others’ experiences of, or hunches about, the situation or issue that is of interest.

PUTTING IT INTO PRACTICE

USING PHOTOGRAPHS BOTH AS A DATA SOURCE AND TO DEVELOP LINES OF INQUIRY

In their study of body image in middle-to-older age women with and without multiple sclerosis, Bailey et al. (2021) asked seven women to provide up to 10 photographs “that represented their body image” (p. 1542) as well as to participate in a one-on-one interview. In the interview, the photographs provided the impetus for four of the six major areas or lines of inquiry. For example, participants were asked why they had selected a particular photograph, and how it represented their body image, including how they felt, saw, thought, and acted toward their body. You can find the full interview guide in Table 1 on page 1545 in Bailey et al. (2021). This study provides a good example of how visual texts and methods can be used as data sources in qualitative research.

Using lines of inquiry as an initial organizing construct for your qualitative interview, rather than a list of discrete, standardized, and fixed questions, opens up, rather than closes down, the interview.
Lines of inquiry do not determine how that interview will proceed in terms of what will be asked and how answers can be given. This is not the case in interviews that have a closed structure. In a closed interview, the person being interviewed may be required to give what is known as a forced response by choosing an answer from a list of predetermined responses such as agree, strongly agree, disagree, strongly disagree. Further, in line with the flexible and emergent nature of qualitative research design, additional lines of inquiry may emerge once you begin to collect data from your interviews. This occurs when participants raise issues or areas of interest that you may not have anticipated, or even thought about, prior to the interview. In addition, the use of probes during an interview will enable you to pursue new and interesting leads as the interview progresses, or to clarify aspects of the answers given to the questions you have asked. For example, when being interviewed, a participant in a study about what it is like to move from home to a residential aged care facility says, “Well, when I moved it was chaotic. It was really difficult.” A probe to an answer like that might be “You describe this situation as chaotic. What was chaotic about it?” Here you are probing and unpacking the
information that they have given you. The interview probe helps ensure the richness of the information gained. Just reporting that the situation was chaotic does not necessarily tell us in depth what about the situation was chaotic. Similarly, you could also probe what the participant meant by “it was really difficult.” Thoughtful use of probes when interviewing is important to enable rich and in-depth interview data that can move beyond description of events (e.g., they were chaotic) to provide insights into participants’ understandings of those events (why they were chaotic) and the contexts in which those understandings are derived. Seeking more details, or such elaboration, is a key part of the analysis and interpretation of the interview data that begins during the interview itself. It is important that you document and justify any changes to the interview guide or questions that occur as each interview or your overall study progresses. This is “an important part of documenting the process of the study” (Carey & Asbury, 2012, p. 50). It is also an important part of ensuring the trustworthiness of your qualitative research design, and therefore the findings that emerge from it.

How Many Lines of Inquiry and Associated Questions Are Ideal for an Interview Guide?

It is important not to have too many lines of inquiry, and questions associated with each of them. If you have too many lines of inquiry, then the interview guide will probably end up taking the form of some sort of structured interview. The large number of lines of inquiry, and questions related to them, limits the opportunities that you will have to explore, and thereby gain in-depth information about, each of those lines of inquiry or the questions asked about each of them. For example, if you had 10 lines of inquiry and three questions related to each of them, you would have 10 areas of inquiry to explore and some 30 questions to ask in the interview.
This may leave little time, or no time at all, to follow up on what the respondents say when addressing those areas or questions. This will at best result in the loss of some depth, and therefore richness, of the interview data. At worst, it will result in the production of shallow and thin data or “superficial snippets” (Carey & Asbury, 2012, p. 50) that range across a long list of areas that you have identified as being important to explore. This data will be of limited use in addressing a research problem for which you are seeking in-depth, rich, qualitatively derived information. So how many lines of inquiry and associated questions are ideal for an interview guide? Drawing on years of experience of doing and supervising qualitative research, Merriam and Tisdell (2016) suggest that

the fewer, more open-ended your questions are, the better. Having fewer broader questions unhooks you from the interview guide and enables you to really listen to what your participant has to share, which in turn enables you to better follow avenues of inquiry that will yield potentially rich contributions. (p. 126)

Fewer, broader questions also ensure that you are not focusing more on how to get through all the questions making up your ambitious interview guide than on what your participant is saying. Hence, Carey and Asbury (2012) suggest limiting “guideline questions to three or four, with subquestions used to further explore each question” (p. 51). You can then use probes to explore what is being said about each question during the interview.
TIP

DIFFERENT QUESTIONS FOR DIFFERENT PURPOSES

What this discussion highlights is that just using your broad research questions as interview questions is not likely to enable you to obtain the in-depth and rich data you were hoping to obtain by using interviews as your method. Your research questions need to be thought through carefully and broken down into initial lines of inquiry. This is because research questions are at a much higher level than your interview questions will be. They are also likely to be framed in very different ways than well-developed interview questions. In short, they are different sets of questions developed for different purposes.

Designing Good Interview Questions

However, just having fewer lines of inquiry, and questions related to them, is not in itself enough to ensure the depth, or richness, of the information that each interview can give us. When designing our interview guide, we will also need to take a very close look at the way that the questions associated with those lines of inquiry are structured. This includes the specific wording used when asking a question. The way that the questions are worded, and structured, affects whether or not it is possible for in-depth, rich information to emerge from the interview. Therefore, when you are designing your research, as much care needs to go into the design of each question as goes into the design of the overall interview guide, and before that, the choice of interviews as a method in your overall research design. Some of the things that you will need to think about are asking one question at a time, avoiding asking dichotomous questions, and not “leading” participants to respond in a particular way.

Ask One Question at a Time

When developing your interview guide, you will need to think about how to design interview questions that are, “at a minimum, . . . open ended, neutral, singular, and clear” (Patton, 2002, p.
353). To ask clear singular questions requires that “no more than one idea should be contained in any given question” (Patton, 2002, p. 358). This is to avoid the 100 questions in 1 question syndrome! An example of this syndrome is the following question, which is in fact a bundle of four different questions: “You recently moved from home to hospital to residential aged care facility—correct? (Question 1) What went well in the move (Question 2) and what did not (Question 3) and why do you think that this was? (Question 4).” If you try to remember that question now after just reading it, maybe you can recall the first question asked, or the last one. However, it is very likely you will have forgotten the other questions in between them. This makes it likely that you would forget to provide information about what went well, and what did not. Even if you did, it is unlikely that you would remember to respond to the question “Why do you think that this was?” in relation to both of the moves being asked about—the move from home to hospital, and then the move from hospital to residential aged care facility. All these questions need to be untangled and made into singular questions related to a clear line of inquiry.
Avoid Asking Dichotomous and Therefore Redundant or Limiting Questions

When developing your interview questions, it is also important to avoid asking dichotomous questions. These are questions “with a grammatical structure suggesting a ‘yes’ or ‘no’ answer” (Patton, 2002, p. 354). For example, “Did you feel sad when you had to leave your home?” You will have to follow up with a number of other questions to get any depth in the answer given (e.g., yes/no), which in effect makes the dichotomous question redundant. Asking dichotomous questions also has a limiting effect on the interview. For example, the question “Did you feel sad when you had to leave your home?” means that participants are limited to answering something related to feeling sad (yes, no, maybe, don’t know). Even if you ask them why they did or did not feel sad, they are limited to answering in relation to whether they were sad or not—which is the idea that you asked them about. In effect this is a form of leading—see the discussion in the next section about what a leading question is.

Don’t Ask Leading Questions or Make Leading Comments When Interviewing

A leading question is one that leads the person being interviewed to talk about, or agree with, something when answering. A leading comment is one you make in response to an answer that the participant has given you during the interview that leads them to reply in a certain way. For example, they tell you when being interviewed that they don’t know anyone in the residential aged care facility. You then comment, “Oh, given that you don’t know anyone in the residential aged care facility, I bet you miss your neighbors.” The participant agrees that they do miss their neighbors. You then claim that your research “found” that this participant said they miss their neighbors. However, what you in fact found was that this participant agreed with your suggestion that they must be missing their neighbors.
You have no way of knowing if “missing their neighbors” was an answer that they would have given, or a subject they would have talked about, if asked a more open question about how they felt about the move.

Try Out Your Draft Lines of Inquiry and Questions Before You Do Your Interviews

Once you have an idea of what your initial lines of inquiry are, and what the questions associated with each line of inquiry might be, it is important to try that draft interview guide out in practice. This is to sharpen, polish, and rehearse that interview guide. The trial interview can be done with a volunteer who knows something about the area you are asking about, but who is not going to be part of your actual study. If possible, it is important that they also know something about qualitative inquiry and interviews. If you are unable to find a person fitting both criteria, then you may think about rehearsing your interview guide twice—initially with a person familiar with the substantive focus of the interview, and then with a person familiar with the methods you are using. In the trial interview, you can try all or some of the ways suggested in the following activity box for getting feedback about how the interview guide works or could be improved. The purpose of the trial interview is to get feedback about how the overall interview guide, as well as each individual question in it, works or does not work. It is also to get feedback about how you did when interviewing—did you rush things, did you make any leading comments, did you miss opportunities to probe something that was said, and so on.
Activity

Ideas for How to Polish and Rehearse Your Draft Interview Guide

Find someone (two people if necessary) who knows something about the area of your study, or about qualitative interviews. Ask them if they are willing to volunteer to give you honest feedback so that the outcome of the test interview can provide you with information enabling you to modify your draft interview guide, and way of interviewing, if needed.

• When you ask the volunteer your questions, instead of them answering them, first ask them to tell you what they think you are asking. This will tell you if your questions are clear enough (remembering that clear is not the same as directive).
• Then ask them to answer the questions to see if those questions are open enough to obtain in-depth information.
• When you are doing so, note how long it takes you to ask them about each of the questions you have in your draft interview guide. This will help you decide whether the demands on the person being interviewed, in terms of the time needed to attend the interview, are reasonable. As we have discussed earlier in this book, this is in fact an ethical issue. It will also help avoid interview-related fatigue caused by having to think through, process, and respond to so many different questions.
• Think about who is doing most of the talking. Is it you asking the questions, or the person trying to think about, and then answer, them? How can you optimize this interaction in the interview?
• Be aware of whether you feel under pressure to get through the questions in the time allocated for the interview, and whether this affects the way that you are able to respond to, probe, and follow up what the person being interviewed said.
• Ask the volunteer for feedback during and after the interview. Remember, it is this feedback that is the main purpose of this type of trial interview.
Applying the Same Type of Thinking to Other Types of Qualitative Methods

When collecting qualitative data using interviews, you make choices about what you will ask participants about. Your goal is to obtain data, the analysis of which can enable you to address your research problems or questions. The choices that you make have that goal in mind. In the same way, when collecting qualitative data by making observations, you make choices about what you will observe. These choices reflect your goal of obtaining data, the analysis of which can enable you to address your research problems or questions. When interviewing, lines of inquiry provide you with a way of guiding and organizing what you will ask your participants about, while not constraining the interview to a list of preset questions. When making observations in social settings, “sensitizing concepts” (Blumer, 1954, p. 7) can play a similar role to that played by lines of inquiry in qualitative interviews. They provide initial “jumping-off points or lenses” (Tracy, 2020, p. 29) when
collecting data using observations. Sensitizing concepts for observations in qualitative inquiry also provide a way of organizing “the complex stimuli experienced so that observing that becomes and remains manageable” (Patton, 2002, p. 278). Examples of sensitizing concepts include rituals (such as how people dress, how they meet each other, where they sit), incentives (such as how people are encouraged to behave in one way and not another), codes both spoken and unspoken (such as sitting in a certain place or speaking in a certain order at a business meeting), and routines (when and why things happen regularly in particular social settings or parts of that social setting). Which sensitizing concepts you choose to use when making observations will emerge from your thinking about the areas or aspects of the social setting or context you are observing that are important for you to know more about. This is because those areas or aspects will enable you to develop an in-depth picture of what is going on related to the issue or problem that is the focus of your research. In this way, sensitizing concepts provide an initial organizing construct for what you observe. You watch for “incidents, interactions, and conversations that illuminate these sensitizing concepts” (Patton, 2002, p. 279) in a particular social setting. However, like lines of inquiry in an interview, sensitizing concepts act as guides. They do not prescribe what you observe, when, and how. Nor do they remain unaffected by what is observed. As observational data is collected and analyzed, new or different sensitizing concepts may emerge, or your original sensitizing concepts may be modified in some way. In such an iterative and reflexive process, the observer “moves from sensitizing concepts to the immediate world of social experience and permits that world to shape and modify his [sic] conceptual framework” (Denzin, 1971, p. 168).
Just as too many lines of inquiry in an interview guide can constrain the opportunities that you will have to explore, and thereby gain in-depth information about, each of those lines of inquiry, so too can having too many sensitizing concepts in an observational study. Overusing sensitizing concepts when collecting qualitative data using observations can in fact “become desensitizing” (Patton, 2015, p. 359). This is because if you have too many things you are trying to observe, this can desensitize you to what else you might be seeing that could be relevant to your research problem. Like lines of inquiry in interview guides, sensitizing concepts related to observations are there to guide, not constrain. As with the draft lines of inquiry in an interview, it is important to try out your draft sensitizing concepts in practice. Reflexively thinking through what worked, and what did not, when you did so will help you refine and develop the sensitizing concepts that will form the organizing construct for your observations. At the same time, you can use this experience to refine and develop your observational skills.

PUTTING IT INTO PRACTICE

LEARNING FROM OTHERS ABOUT HOW SENSITIZING CONCEPTS MAY BE PUT INTO ACTION

Helene Aksøy’s (2009) study of issues for front-line nurse managers arising from a change in home care service in Norway provides a good example of reflexive thinking when putting the idea of sensitizing concepts into practice. Her study comprised three case studies designed to “explore what goes on in a nurse manager’s professional life”
(Aksøy, 2009, p. iii) as they deal with constant change. Qualitative data was collected using observations, focused conversations related to the observations, and analysis of texts (such as minutes of meetings or official communications) related to the change. Aksøy noted that, as she was a master’s student and therefore “a novice researcher entering into a study that by definition had no set formula” to follow, she “felt a strong need for guidelines when designing the study and deciding how my data was to be collected” (both quotes from Aksøy, 2009, p. 43). This led her to use the theoretical threads that shaped and informed the study to develop a list of sensitizing concepts that could “help guide observations while out in the field and the organizing of field-notes later” (p. 43). She reports that this “list later proved vital when trying to get a sense of the whole by looking at parts when I started my observations. It helped me focus my observations during field-work” (p. 44). She notes that while her focus was narrow in the sense that she wanted to observe how issues arising from the change affected the nurse managers’ daily work,

my observations were wide. I was observing and looking for my topic of interest in a number of different places and situations. Silverman (2006) stresses the need to both funnel and overview at the same time while observing, and as hard as it might sound, I really tried to do just that, by observing an array of different situations, but focusing on the nurse managers the whole time. (pp. 47–48)

She also used focused conversations (Street, 1995) “to probe and deepen what I had already observed during field observations. I made the decision to use focused conversations based on the fact that I did not yet know what I was going to observe during the field observations” (Aksøy, 2009, p. 49). The sensitizing concepts used by Aksøy were

• Geographical setting.
Where are the offices placed? How does the staff access their patients?
• Working environment. Are the offices crowded? Does the nurse manager have her own office, or does she share her workspace with others?
• Social environment. Who are present? What is going on? When is this happening?
• Historical perspectives. The history of the home care nursing services in each municipality. Number, nature, and result of organizational changes.
• History of recruiting staff. Is it easy or difficult to recruit staff? Do people stay in their jobs when recruited or is the turnover of staff high?
• Description of planned activities and structured interaction. What kind of formal meetings are held? Who are present? What is happening?
• Observing and describing unplanned activities and informal and impromptu interaction. Describe situations that arise during the day. What does the nurse manager do?
• Observe and describe interactions with others. Who does the nurse manager meet during a day? Are there drop-in visitors? When does she meet her staff? When does she meet senior management?
• Comment on non-occurrences—what does not happen? “What does not happen?” or rather, what was not talked about, turned out to be a vital finding of this study.
• Record special language. Identify insider language—explain terms used.
• Observe nonverbal communication. Observe NM [nurse managers—our addition] in interaction with others in both planned and unplanned activities.
• Analyze documents. (pp. 43–44)

Aksøy’s study is an excellent example of what you will need to think about and justify when putting observations into practice. She does not gloss over the struggles that she had when doing the research, thereby providing useful and important insights into putting research design into practice. She highlights that this is a very challenging and messy process at times.
In so doing, she provides a frank and honest account of reflexive thinking in practice as a research study unfolds—sometimes in unexpected ways.
WHO WILL YOU INTERVIEW AND WHY?

A qualitative research interview, in keeping with the methodological assumptions that it is embedded in, seeks people’s perceptions or understandings or constructions or experiences of situations or contexts related to the research problem that your research is being designed to address. The aim of the interview is to obtain in-depth, layered, and richly nuanced information about that situation or event. Therefore, the people we select to interview in our study, our study sample, will be chosen because they are information rich in some way about that situation or event or the context in which it occurred. When selecting our study sample, we purposefully seek to find people to interview who can tell us something about what we want to know about—the purpose of the study. This type of sampling is referred to as purposeful sampling. Likewise, if our research design includes using observations or texts as our source of qualitative data, we will need to decide who, and what, we will observe or which texts we will include in our sample. These sampling considerations will also be based on the underlying principle of purposefully seeking to find sites or people to observe, or texts to analyze, that are information rich about whatever it is that is the focus and purpose of the study. This is because the logic and power of purposeful sampling lies in selecting information-rich sites and examples for study in depth (Patton, 2002).

Choosing Between Different Types of Purposeful Sampling Plans in Your Study Design

There are many ways that purposeful sampling is put into practice. While there is not agreement among qualitative researchers about how many types of purposeful sampling there are, or what they are called, Tracy (2020) provides a useful overview of what she terms eight purposeful sampling plans. They are included in Table 6.1 below.
Each of these purposeful sampling plans is underpinned by a different logic that “serves a particular purpose” (Patton, 2002, p. 230). It is this logic that needs to be thought

TABLE 6.1 ■ Commonly Used Purposeful Sampling Plans. Taken from Tracy (2020, p. 86, Tips and Tools 4.1). Reproduced With Permission

Type of sample | Purpose
Random | Creates an equal opportunity for all the members of a certain population to be chosen
Convenience/Opportunistic | Appropriate when time and money are scarce, but may indicate laziness
Maximum variation | Includes the entire rainbow of possible data. Helps to ensure the inclusion of usually marginalized data
Snowball | Expands in size as the researcher asks study participants to recommend other participants
Theoretical construct | Helpful for testing and finding gaps in existing theory
Typical instance | Focuses on the routine, the average, and the typical
Extreme instance | The most/least/best/worst of a certain category. Can be valuable and interesting, but also time-consuming
Critical instance | Focuses on data that are rare, under-studied, or strategically bounded to the argument at hand; can help create logical deductions that show how findings are transferable to other populations

through and used to decide which (or which combination) of these purposeful sampling strategies will enable you to meet the goals of your research. For example, consider the purposeful sampling strategy known as maximum variation. “A maximum variation sample is one in which researchers access a range of data or participants who will represent wide variations of the phenomena under study” (Tracy, 2020, p. 83). If you choose to use some form of maximum variation purposeful sampling in your study, there must be a reason why you need this variation in order to be able to address what it is that you are focused on finding out more about in your study. This reason must be congruent with the logic behind maximum variation purposeful sampling. This logic is that “[a]ny common patterns that emerge from great variation are of particular interest and value in capturing the core experiences and central, shared dimensions of a setting or phenomenon” (Patton, 2002, p. 235). It also might reflect your desire to help “ensure the inclusion of usually marginalized data” (Tracy, 2020, p. 86). Most qualitative research approaches, and designs, draw on non-positivistic inquiry paradigms. In these non-positivistic qualitative approaches, maximizing variation is about maximizing the variation among the information-rich sites, texts, or people that comprise our sample. The sites, texts, or participants in our study sample are all information rich, but they vary in some way.
For example, the geographical location of the information-rich sites you choose to make observations about may vary, as might the level of experience of the participants you choose to interview. However, what all the sites and all the participants have in common is that they are all information-rich in terms of what they can tell us about the problem or questions that our research is being designed to address. Your reason for maximizing the variation in your purposeful sample of sites, texts, or participants must be able to be justified in terms of such variation adding complexity and breadth (Tracy, 2020), and thereby depth and richness, to your data about your research problem or questions.

142  Research Design

TIP
HOW TO ANSWER THE QUESTION OF HOW MUCH VARIATION IS NEEDED TO BE CONSIDERED MAXIMUM VARIATION

Our students often ask us, "Do I need one of everything?" when they are thinking about employing maximum variation purposeful sampling in their study. They ask this question because they are worried that maximum variation purposeful sampling means they need to have every possible type of variation that exists between the information-rich sites, or participants, present in the sample for the study. For example, if they are studying residential aged care facilities, do they need to have one city, country, large, small, public, private, established, new, and so on residential aged care facility in their study? And then, within each of the sites, one of every type of potential information-rich participant, for example, each possible level of experience, age, gender, ethnicity, educational and professional training, and so on?

We answer this question with a question. We ask them why they think that they need to have one of everything. What does that enable them to say that is important to answering their research questions? As they think about this, we remind them that it is important to remember that the logic underpinning the choice of maximum variation sampling is not the same as the logic underpinning representative sampling. Representative sampling draws on understandings derived from positivist and post-positivist inquiry paradigms. Within such inquiry paradigms, "a representative sample is arguably needed, involving representatives of each of the sub-segments of the total population to be researched" (Boddy, 2016, p. 431) in order to enable statistical generalization to the broad population from which the sample is drawn. In a qualitative research design that does not draw on positivist or post-positivist inquiry paradigms, statistical generalization to the broad population from which the sample is drawn (e.g., all types of residential aged care facilities) is not the purpose of the study. Rather, the purpose is to develop in-depth and thick interpretations of what is going on related to the aspects of the residential aged care facility as a social setting that your research is focused on. Therefore, as we stated previously, your reason for maximizing the variation in your purposeful sample of sites, texts, or participants must be able to be justified in terms of adding complexity and breadth (Tracy, 2020), and thereby depth and richness, to your data about your research problem or questions.
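One way to see why "one of everything" is not required is to treat maximum variation as choosing a small set of information-rich cases that differ from one another on the dimensions that matter for your research questions. The sketch below is purely illustrative and is ours as editors, not a procedure from the sampling literature or from this book's studies; it uses a simple greedy heuristic, and the facility names and attributes are hypothetical.

```python
def maximum_variation_sample(candidates, dimensions, k):
    """Greedily pick k information-rich candidates that vary on key dimensions.

    candidates: list of dicts describing potential sites or participants.
    dimensions: the attribute names on which variation matters for the study.
    Illustrative heuristic only: each pick is the candidate whose values are
    new to the sample on the most dimensions.
    """
    def difference(candidate, sample):
        # Count dimensions on which the candidate differs from every chosen case.
        return sum(
            all(candidate[d] != chosen[d] for chosen in sample)
            for d in dimensions
        )

    sample = [candidates[0]]  # start from any information-rich candidate
    remaining = candidates[1:]
    while remaining and len(sample) < k:
        best = max(remaining, key=lambda c: difference(c, sample))
        sample.append(best)
        remaining.remove(best)
    return sample

# Hypothetical residential aged care facilities, for illustration only.
facilities = [
    {"name": "A", "location": "city", "size": "large"},
    {"name": "B", "location": "city", "size": "large"},
    {"name": "C", "location": "country", "size": "small"},
]
chosen = maximum_variation_sample(facilities, ["location", "size"], k=2)
print([f["name"] for f in chosen])  # → ['A', 'C']
```

Note how the sketch picks facility C over facility B: B duplicates A on both dimensions, so it adds no variation, whereas C differs on both. That is the logic of varying on dimensions that matter, not enumerating every combination.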
The take-home point here is that choices between different types of sampling strategies when collecting qualitative data must be congruent with both the purpose of the research and the methodological assumptions underpinning the research design.

Putting Purposeful Sampling Into Practice When Designing Your Research

How do you actually put purposeful sampling into practice when designing your research? A useful way to address this question is to share an example of how we have thought about and used the logic of purposeful sampling when designing our research. In this example, which can be found in the Putting It Into Practice box below, we have cut and pasted sections from the research design that was the basis for the application for funding for our study "Digitalization, Organisations and People: Issues and Challenges. Exploring the Digitalization Imperative in Higher Education Institutions in Norway." Using these excerpts, we demonstrate the logic behind the decisions we made about the sampling we were proposing to use in that study.

PUTTING IT INTO PRACTICE
PUTTING THE LOGIC OF PURPOSEFUL SAMPLING INTO PRACTICE WHEN DESIGNING RESEARCH

We wanted to study the effects of the imperative to "go digital" in higher education institutions in Norway. Therefore, we wrote a proposal to seek funding for a research study that we called "Digitalization, Organisations and People: Issues and Challenges. Exploring the Digitalization Imperative in Higher Education Institutions in Norway" (Cheek & Øby, 2018).
The purpose of the study was to

1. Explore how digitalization is understood and enacted in higher education institutions in Norway.
2. Identify and describe issues such digitalization poses for people in those institutions.
3. Explore and identify implications of the findings from Aims 1 and 2 for how to optimize the implementation of, and therefore potential offered by, digitalization of higher education.

Sampling: Selection of Study Sites and Participants

In keeping with the principles of qualitative inquiry, the study used a purposeful sampling strategy. The sampling in the study was purposeful at three levels:

1. Choice of higher education as the organizational context in which the study will be based
2. The higher education institutions selected as the sites for the study
3. The participants selected at various stages of the study

Justification for the Three Levels of Purposeful Sampling

1. The higher education sector

The logic behind the purposeful choice of the higher education sector as the site for the study relates to the fact that the entire Norwegian sector of higher education is currently affected by the government's overt emphasis on the need for this sector to embrace digitalization. This overt emphasis is apparent in the recent report Digitalization Strategy in Higher Education and Research 2017–2021 (our translation), published September 19, 2017.16 The report reminds readers that institutions in higher education are "subject to the ministry's authority and instruction" (Kunnskapsdepartementet, 2017, p. 27, our translation), in this case to digitalize, and that the institutions themselves are responsible for implementing digitalization initiatives. Thus, Norwegian higher education provided a timely and information-rich context for our study of the connections between people, organizations, and digitalization.

2.
Specific higher education institutions as study sites

A total of four higher education institutions would be selected according to the following grid (see Table 6.2 below), which employs the principles of maximum variation sampling (including opposites). The logic behind this type of sampling is that "common patterns that emerge from such variation are of particular interest and value in capturing the core experiences and central, shared dimensions of a setting or phenomenon" (Patton, 2002, p. 235), in this study the complex intersections between digitalization, organizations, and the people in them.

TABLE 6.2 ■ Categories of Higher Education Institutions From Which Study Sites Will Be Selected

Research intensive / Government: Government higher education institution designated research intensive (university or specialized university status)
Research intensive / Nongovernment: Nongovernment higher education institution designated research intensive (university or specialized university)
Non-research intensive / Government: Government higher education institution designated non-research intensive (public University College status reflecting a tradition of focus on teaching rather than research)
Non-research intensive / Nongovernment: Nongovernment higher education institutions designated non-research intensive and/or teaching intensive (have individual study program accreditation, e.g., bachelor studies, but not organizational accreditation)

3. Selection of participants

Participants would be drawn from staff who are employed at the study sites and would include

• Administrators of varying levels and experience at each site
• Academic staff of varying levels and experience at each site
• Managers at varying levels who contribute to or determine policy related to how digitalization occurs at each site

Purposefully selected academics, administrators, and managers from each of the four sites, chosen using critical case and "snowball" (Patton, 2002, p. 243) purposeful sampling, would be invited to tell their "stories" of specific digitalization-related incidents in which they have been involved.

Key points that the example in the box above highlights about using purposeful sampling to obtain the qualitative data needed for addressing your research problem are that

• Thinking needs to be done about every sampling choice that you make in your research design.
• There are different levels of sampling choices to be made when designing your research. For example, what instance, example, or case of what you are interested in will be studied (e.g., digitalization in higher education); where will it be studied (the specific higher education institutions); and who will be asked about it (the specific participants in those higher education institutions).
• It is possible to employ different strategies of purposeful sampling at different levels or within levels of the study. For example, we chose higher education as the overarching information-rich case or instance of digitalization to study.
This is because, given the government requirement that organizations in the higher education sector in Norway digitalize, they are information-rich sites about issues and challenges arising from the intersection of digitalization, organizations, and people.

• Having decided this, we then chose to employ a number of purposeful sampling strategies within this overarching choice. We used the strategy of maximum variation sampling to select which specific higher education institutions we would use as the actual sites for the study. Following this, we used "critical case" and "snowball" purposeful sampling strategies (Patton, 2002, pp. 236–238) to identify which people in those sites we would interview.
• Each level of this cascade of choices related to sampling strategies needed to be justified and explained. This required us to add an "and why" to our thinking about every decision that we made about our sampling. It was part of ensuring the trustworthiness and credibility of our research design.
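The grid logic behind Table 6.2 can be pictured as a tiny program: express the maximum-variation dimensions, enumerate every cell of the grid, and pick one candidate site per cell. The sketch below is our editorial illustration only; the site names and attributes are hypothetical, and nothing like this code was part of the study itself — it simply shows how "one site per cell" organizes the choice.

```python
from itertools import product

def select_sites_by_grid(candidate_sites, dimensions):
    """Pick one study site per cell of a maximum-variation grid (cf. Table 6.2).

    dimensions: dict mapping each dimension name to its category values.
    Returns a mapping from each grid cell (a tuple of values) to the first
    matching candidate, or None when no candidate fills that cell.
    """
    names = list(dimensions)
    cells = product(*(dimensions[n] for n in names))
    selection = {}
    for cell in cells:
        wanted = dict(zip(names, cell))
        match = next(
            (s for s in candidate_sites
             if all(s.get(n) == v for n, v in wanted.items())),
            None,
        )
        selection[cell] = match
    return selection

# Hypothetical candidate institutions, for illustration only.
candidate_sites = [
    {"name": "Uni-1", "governance": "government", "intensity": "research intensive"},
    {"name": "Uni-2", "governance": "nongovernment", "intensity": "research intensive"},
    {"name": "College-1", "governance": "government", "intensity": "non-research intensive"},
    {"name": "College-2", "governance": "nongovernment", "intensity": "non-research intensive"},
]
grid = {
    "governance": ["government", "nongovernment"],
    "intensity": ["research intensive", "non-research intensive"],
}
selection = select_sites_by_grid(candidate_sites, grid)
```

A cell left as None would signal that the grid cannot be filled from the available candidates, which is itself a sampling decision to document and justify.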
PUTTING IT INTO PRACTICE
GOING DIGITAL—HOW DOES THIS AFFECT HOW QUALITATIVE DATA IS COLLECTED?

The internet provides us with tools and strategies that can be used for collecting qualitative data online. For example, in-depth individual interviews or focus groups can be conducted verbally and virtually over platforms such as Zoom, Teams, or Skype. This can be very useful when it is not possible to meet in person for the interview, for example, during the COVID-19 pandemic. Alternatively, you may choose to include some form of online interviews in your research design because of feasibility considerations; for example, the purposefully selected participants for your focus group live in different countries (e.g., if your study is about how CEOs of major companies in different countries perceive good leadership post–COVID). However, even when you have decided to interview online, you still have some decisions to make. One of them is whether choosing to interview online will affect the ability of your information-rich participants to participate in your study. For example, depending on what you are studying and who makes up your purposeful sample, some participants in your sample may not have access to computers or be familiar with or comfortable using platforms such as Zoom. You will also need to decide if you will conduct online interviews comprised of verbal exchanges or use some form of textual exchange such as email, or combinations of both, such as a verbal exchange over Zoom followed up by an email exchange.
If you choose to interview using some form of email exchange with a participant, you will also need to decide whether that exchange will occur in a set time frame when both you and your participant are online at the same time (known as synchronous interviews), or whether you will send and receive emails over a longer period of time when you both may not be present online at the same time (known as asynchronous interviews). If using focus groups, you may design the focus groups around asynchronous written exchanges on electronic bulletin boards or other types of online text-based forums. In this case, you might place an initial question on that forum or bulletin board, after which the participants respond to your question and to each other's responses about that question. In this way, a series of discussion threads is developed for each of the questions that you pose (see Morgan, 2019). Alternatively, you may choose to have all participants in the virtual focus group present at the same time. You can do this either by having them all on video links at the same time (e.g., at a Zoom meeting) or by having them all participate in a text-based email exchange within a set period of time. No matter how you decide to design your online interviews or focus groups, you will need to make sure that technical issues do not interfere with the interview or focus group quality—for example, connection problems resulting in participants dropping in and out of meetings. The take-home point from all this is that collecting qualitative data online introduces another layer of issues that you will need to think through, make decisions about, and justify when designing your qualitative research. One of these is why using online methods to collect data is appropriate for your qualitative research study. Another is why the specific form of the online method you have chosen is appropriate for the study.
When thinking about this, “the nature of the topic and the characteristics of the desired participants will be crucial in deciding whether to use online groups [or interviews], and if so, which form” (Morgan, 2019, p. 123).
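The "discussion thread" structure described above for asynchronous online focus groups can be pictured as a small data model: one thread per moderator question, with participant posts and nested replies. This sketch is ours as editors and purely illustrative; the participant names and wording are invented, and real platforms (bulletin boards, forums) provide this structure for you.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    replies: list = field(default_factory=list)  # nested participant replies

@dataclass
class Thread:
    """One discussion thread per moderator question in an asynchronous
    online focus group (cf. Morgan, 2019). Illustrative sketch only."""
    question: str
    responses: list = field(default_factory=list)

    def respond(self, author, text, reply_to=None):
        # A reply either answers the moderator's question directly
        # or responds to another participant's post.
        post = Post(author, text)
        (reply_to.replies if reply_to else self.responses).append(post)
        return post

# Hypothetical exchange, for illustration only.
thread = Thread("How has digitalization changed your daily work?")
first = thread.respond("Participant A", "Most of my meetings moved online.")
thread.respond("Participant B", "Same here, plus new reporting tools.", reply_to=first)
```

Seeing the data this way also previews the analysis stage: each thread arrives already organized by question, with participant-to-participant interaction preserved in the reply structure.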
CONCLUSIONS

In this chapter, we have explored some of the interrelated design decisions you will need to make related to collecting qualitative data and the methods that you will use to do so. Choosing to use a particular type of qualitative method to obtain your qualitative data is just the beginning of the thinking that you will need to do about that method. As our discussion has highlighted, there is still a lot of thinking to be done after making that choice. You will need to put as much effort into designing how the method(s) that you have chosen will be put into practice as you did into the choice of those methods in the first place. For example, if your method of choice is some form of qualitative interview, you will need to consider, and make choices about, how structured the interviews in your study will be, whether they will be individual or group interviews of some kind, what you will ask in those interviews, and who you will interview. The credibility of the method you use to collect qualitative data, and therefore the trustworthiness of the research, depends on how you justify these choices. Such justification includes demonstrating that the form that the methods take, for example, the type of sampling that they use or the amount of structure that they have, is congruent with the methodological assumptions underpinning your research. It also requires you to show how the methods you use, and the type of data you obtain by using those methods in the way that you do, enable you to address your research problem or questions. Such incremental, iterative, and connected thinking needs to be done no matter which qualitative research approaches, or the methods associated with them, you are using in your research design. In putting the emphasis throughout this chapter on the thinking about methods, and not simply on methods as techniques, we hope we have disrupted the way that methods are very often presented and discussed.
In these discussions, methods are removed from the context of the research design that they are part of and presented as self-contained procedures—procedures that in turn are made up of stand-alone sets of techniques that can simply be picked up and inserted into a research design. In the next chapter, we continue our discussion of how to put the methodological thinking associated with, and underpinning, qualitative research approaches into practice when designing research. Our focus is the series of decisions you will need to make about how you will analyze the qualitative data that you collect. Such analysis begins from the moment you obtain your first qualitative data and continues throughout the qualitative research process. It affects other design considerations such as sample size. Therefore, how and when this analysis occurs in qualitative research, and why, are important considerations when you design your research.

SUMMARY OF KEY POINTS

• There is much variation between types of qualitative methods.
• Qualitative methods do not have a standard form; disciplinary and theoretical paradigms and traditions shape the way that a qualitative method is understood and put into practice.
• You will need to think about what you want your data to be able to be used for, and why, when choosing what type of qualitative method (e.g., an interview or an observation) to employ in your research design.
• However, choosing a specific qualitative method to collect qualitative data as part of your research design is just the beginning of the thinking you will have to do about that method.
• For example, if using interviews to collect qualitative data, you will also need to think and make choices about how structured the interview will be, who you will interview, whether to use individual interviews or group interviews, as well as what questions to ask.
• Similar principles apply when using other qualitative methods. For example, when using observations, you will need to think, and make choices, about how structured your observations will be, as well as who, what, and where you will observe.
• All choices you make when putting a qualitative method into practice must be justified and explained. This is part of ensuring the trustworthiness of the research and the credibility of your research design.
• Collectively, this thinking, and the choices and decisions associated with that thinking, will determine the form that the method you use in your study will take, and how that method will be put into practice.
• Therefore, before being able to use any qualitative method to collect data that will enable you to answer your research question(s), there is a series of questions you need to ask yourself, and decide upon, about that method.
• All decisions you make should also involve thinking about the feasibility of the study.
• Deciding to use some form of online method to collect data raises further design considerations.
• It is important to remember that methods are not just about collecting data; they also provide strategies for analyzing that data. The next chapter develops this key point.
KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

focus groups
individual interviews
interview guide
interview probes
lines of inquiry
purposeful sampling
semistructured interviews
sensitizing concepts for observations in qualitative inquiry
standardized observations
structured or closed interviews
unstructured or open interviews

SUPPLEMENTAL ACTIVITIES

1. Tracy (2020) highlights eight purposeful sampling plans that may be used when collecting qualitative data. You will find them in Table 6.1 in this chapter. For each of these sampling plans, work out
• A research question that might use this type of sampling strategy
• What inquiry paradigm(s) might use that strategy, and which would not
• The benefits or strengths that each type of sampling has
• The disadvantages or limitations that each has

2. Find an article that is a published report of qualitative research that used interviews to collect the qualitative data. When reading this report, ask yourself these questions. Do the authors make clear

• what the purpose of the research was?
• why they used a qualitative approach?
• why they used interviews as their method of collecting qualitative data?
• what disciplinary or inquiry traditions, or theoretical schools of thought, the authors have drawn on related to the interviews and how this affected their view of what type of interview to use?
• what choices they made related to
  a. how structured their qualitative interviews were?
  b. whether to interview participants individually, or in some form of group?
  c. what they asked participants in the interview?
  d. who they interviewed?

The answers to these questions are part of establishing the credibility of that reported research. How convinced a reader is of that credibility is based on what the authors report about these types of choices. If you struggle to find an article, ask your tutor or fellow students for recommendations relevant to your area of study.

FURTHER READINGS

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2018a). The SAGE handbook of qualitative research (5th ed.). SAGE.
Tracy, S. J. (2020). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact (2nd ed.). John Wiley.
Yin, R. K. (2016). Qualitative research from start to finish (2nd ed.). Guilford Press.

NOTES

1. This is a point that we will return to in the next chapter, which focuses specifically on the analysis of qualitative data.
2. In Chapter 9, we return to discuss closed questions.
3. See Chapter 8.
4. Morgan (2019, pp. 5–6) points out that one of the issues regarding the definition of focus groups "is whether they are different from group interviews. Although some early authors (e.g., Frey & Fontana, 1991) made a distinction between focus groups and group interviews, the broad tendency since then has been to treat the two labels as synonyms. As a result, 'focus groups' is an umbrella term encompassing many alternative formats."
5. See the discussion of feasibility in Chapter 3.
6. See Chapter 7, where transcription is discussed.
7. We return to this point in Chapter 7, which explores sample size considerations when analyzing qualitative data.
8. We return to this point, and develop it, in Chapter 7.
9. See Chapter 5.
10. See Cheek, J., & Ballantyne, A. (2001). Moving them on and in: The process of searching for and selecting an aged care facility. Qualitative Health Research, 11(2), 221–237.
11. See Chapter 7, where this iterative, ongoing analysis process, involving the simultaneous collection and analysis of qualitative data, is discussed in more detail.
12. See Chapter 2.
13. See Denzin (1971) for a very useful discussion of how sensitizing concepts guide naturalistic inquiry such as field work. Also see Patton's excellent discussion of field work strategies and observation methods in Chapter 4 of Patton (2002).
14. Silverman, D. (2006). Interpreting qualitative data: Methods for analyzing talk, text, and interaction (3rd ed.). SAGE.
15. Activity 1 in the supplemental activities at the end of this chapter develops this point by focusing on different purposeful sampling plans (Tracy, 2020) and asking you to work out for each of these sampling plans: a research question that might use this type of sampling strategy; what inquiry paradigm(s) might use that strategy, and which would not; the benefits that each has; and the disadvantages that each has.
16. The report states that "[f]or higher education and research in Norway to make use of the potential of technology, . . . using technology . . . [must be] lifted to the strategic level of the institution and integrated into all academic and administrative activities. . . . Development and use of technology in the sector must therefore be anchored in strategies, both at national and institutional level" (Kunnskapsdepartementet, 2017, p. 5, our translation, our emphasis).
7
ANALYZING AND INTERPRETING QUALITATIVE DATA

PURPOSES AND GOALS OF THE CHAPTER

This chapter is the second of four interconnected chapters (Chapters 6, 7, 8, and 9) exploring how to put the methodological thinking associated with, and underpinning, qualitative and quantitative research approaches into practice. The previous chapter (Chapter 6) focused on what you will need to think through and make decisions about when collecting data using qualitative methods. However, qualitative methods and the strategies of inquiry they employ are not just about collecting data. They are also about analyzing and interpreting that data. In this chapter, our focus is the thinking you will need to do and the decisions you will have to make related to how you will analyze and interpret the qualitative data you collect. Throughout the discussion we emphasize that the collection and analysis of qualitative data is a simultaneous and iterative strategy. Analysis of qualitative data begins when that data is being collected. Data collection and analysis continue iteratively until trustworthy and credible interpretations of that qualitative data can be made. Developing an iterative qualitatively driven analytic strategy involves making decisions about how you will organize the data you collect, as well as how you will justify and keep track of the decisions you make when analyzing that data. We explore how the decisions you make when designing your research affect the credibility or trustworthiness of the interpretations you make based on the data that you have collected and analyzed. For example, how will you know that you have "enough" data, or have done "enough" analysis, to be able to make credible and trustworthy interpretations of that data? Throughout the chapter, we establish some common features, and principles, of qualitative analysis and interpretation. However, at the same time, we highlight that, given the variation in types of qualitative approaches and methods, how these common features and principles are put into practice varies.

The goals of the chapter are to

• Establish that analyzing qualitative data is an iterative and reflexive process that is fundamentally inductive.
• Emphasize that data analysis and data collection occur simultaneously in qualitative research approaches.
• Highlight that analysis of qualitative data is underpinned by the "art of interpretation" (Denzin & Lincoln, 2000, p. xii).
• Stress that analyzing and interpreting qualitative data are strategies, not procedures.
• Illustrate why design considerations related to analyzing and interpreting qualitative data cannot be reduced to, or captured by, a fixed linear series of prescribed steps or phases.
• Establish some recurring features and principles common to many approaches to analyzing qualitative data.
• Identify practical strategies, such as transcription of interviews, for managing and organizing the large amounts of data that you obtain.
• Explore the role that strategies such as coding and writing memos play in many qualitative analyses.
• Demonstrate how the decisions related to the collection and analysis of data that you make when designing your research affect the credibility or trustworthiness of the interpretations you make based on that data and those analyses.
• Explore how decisions about how many participants to include in a qualitatively driven research study are guided by the principle of obtaining in-depth and rich data that enable credible interpretations to be made, rather than by the number of participants as such.

ANALYSIS OF QUALITATIVE DATA: AN ITERATIVE AND DYNAMIC STRATEGY

Analysis of interview, observational, or textual data does not simply describe what people said in an interview, did when they were observed, or what a text depicts. Rather, analysis is about making sense of what was said, observed, or depicted to address or answer your research questions. Making sense of such data requires "thinking about what the data mean" (Patton, 2012, p. 349). Thinking about what the data mean requires you to become very familiar with, or immersed in, the data you have collected. You will need to spend a lot of time with it: reading it, thinking carefully about what you have read and what it means, reading it again, and so on.
In this iterative process, you examine and reexamine your data, collect more data when necessary, and describe, analyze, and reanalyze that data in order to make sense of it. For example, if you use some form of qualitative interviews to obtain your data, then you continually, or constantly, compare1 what you have found in one set of data (such as an interview) to other sets of data (such as other interviews); continually compare and refine your hunches or tentative interpretations of that data and how they relate to what it is that you want to know about (your research questions); and compare those tentative interpretations to other relevant theoretical and empirical work in the area (what have others found, and how does what you have found relate to this work?). As a result of this iterative and reflexive process, "the researcher's understanding grows so that she or he can begin to create models or diagrams of the relationships in the data, make links with the literature, seek relationships among concepts or categories" (Mayan, 2009, p. 88). Such analytical work is a central part of the "art of interpretation" (Denzin & Lincoln, 2000, p. xii), which is a central tenet of the craft of qualitative inquiry. The analytical work that you do "creates the justification, provides the evidence for the interpretations that result from that work, and sets the stage for the organization and style of presentation for those interpretations" (Freeman, 2017, p. 123).

Chapter 7 • Analyzing and Interpreting Qualitative Data   153

PUTTING IT INTO PRACTICE
PUTTING AN ITERATIVE AND REFLEXIVE ANALYSIS STRATEGY INTO PRACTICE

Putting an iterative analysis strategy into practice is not straightforward. There are no linear steps to follow. Rather, the strategy is comprised of reflexive cycles of constantly thinking about the data we are collecting and the interpretations we are making of that data. So how might you put this sort of strategy into action? Agar's (2008) description of how an iterative analytical process unfolds provides you with a way forward when thinking about how to undertake such a nonlinear analytic process. We have broken his original quote into dot points to indicate the elements of such a process:

• you learn something ("collect some data"), then
• you try to make sense out of it ("analysis"), then
• you go back and see if the interpretation makes sense in light of the new experience ("collect more data"), then
• you refine your interpretation ("more analysis") and so on. (text from Agar, 2008, p. 62, dot point structure added)

Such a process will require you to keep asking yourself a lot of questions, both about the data itself and about what you are saying about it and using that data for. Figure 7.1 below provides a visual illustration of how an iterative analytical process unfolds, according to Agar (2008).
FIGURE 7.1 ■ How an Iterative Analytical Process Unfolds
[Cycle: You learn something ("collect some data") → You try to make sense out of it ("analysis") → You go back and see if the interpretation makes sense in light of the new experience ("collect more data") → You refine your interpretation ("more analysis") → and so on]

At this point, you also might find it useful to revisit the discussion in Chapter 1 about the ideas of iteration and reflexivity, as they are central to any discussion of analyzing qualitative data, including the discussion in this chapter.

WHEN DOES ANALYSIS "BEGIN" WHEN DESIGNING AND CONDUCTING QUALITATIVE RESEARCH?

Given our previous discussion (see box above) of how you might go about putting an iterative analysis strategy into action, it should not surprise you that when using qualitative research approaches, data collection and analysis is a dynamic process that occurs
simultaneously (Merriam & Tisdell, 2016). Therefore, data analysis "begins" as you are in the process of collecting your first qualitative data. For example, if you are collecting data using some form of qualitative interview, then analyzing information obtained from that interview begins as the interview is taking place. The use of probes2 such as, "What did you mean by that?" or "Am I right in understanding that what you are saying is . . ." or "How does what you just said fit with what you said previously about X?" indicates that the interviewer is listening to, and at the same time analyzing and trying to make sense of, what is being said as the interview unfolds. Such probing and clarification of what was said, and why, continues throughout the interview. When the interview is completed, you continue your analysis of it. For example, immediately after the interview ends, you may make notes of your reflections on what stood out to you from what the participant said in terms of its relevance to your research problem or questions. If you have conducted other interviews prior to that interview, you may also make notes about areas or ideas that seemed to be the same as, or different from, other interviews, and make a note or memo to yourself to check this out further. These notes capture your tentative hunches about what might be going on in this interview in terms of what it is telling you about what you want to know more about, and how this relates to what other interviews have told you. In addition, as soon as possible after each interview you will usually produce some form of written record or transcript of that interview, read it line by line, and highlight parts of the interview that are information rich in terms of an aspect or aspects of the questions being asked. When doing so, you will make notes about your impressions of how what you have just highlighted in the interview transcript relates to your research questions.
You will also make notes about how what you have highlighted relates to sections that you have highlighted in earlier interviews. You will undertake a similar process of analysis for each interview you conduct. At the same time, you will continually reflect on, and compare, the information gained from each specific interview to all the other interviews that make up the data for the study. When doing so, the focus is "making connections or asking questions about why something is the way it is. We are entertaining theoretical notions about the phenomenon" (Mayan, 2009, p. 89). Making such connections adds depth to your analysis and lifts it beyond a commonsense description of what each person said in the interview. It is an important part of the "art of interpretation" (Denzin & Lincoln, 2000, p. xii) that underpins qualitative analysis.

TIP
CODING

These notes, and highlighted parts of the transcript text, are the parts of the interview that you identify as being potentially relevant and useful in some way for addressing your research problem. This process is often referred to as coding. Coding is a process used in many types of qualitative research approaches. We will return to explore the idea of coding in more depth later in the chapter.

Using Memos to Capture Your Analytic Thinking and Hunches

Often you will see the notes you make about the interview, and your hunches about how that interview might relate to other interviews you have conducted and begun analyzing, called memos. Writing memos enables "thinking on paper" (Maxwell, 2013, p. 20) as you
try to capture the insights you have gained when describing and analyzing what people have told you in an interview. The idea of memos was introduced in a type of qualitative research known as grounded theory by Glaser and Strauss in 1967. They observed that when "generating theory it is often useful to write memos on . . . the copy of one's field notes. Memo writing on the field note provides an immediate illustration for an idea" (Glaser & Strauss, 1967, p. 108). Later, Glaser defined memos as "the theorizing write-up of ideas about codes and their relationships as they strike the analyst while coding" (Glaser, 1978, p. 83). Since then, the idea of memoing and writing memos has come to be used by qualitative researchers more generally, not just those generating grounded theory. In this case, the term memo refers to the preliminary and developing analytical notes or hunches written by a qualitative researcher during analysis of data. Keep in mind that the first memos you write might seem "awkward and simple. . . . Later in the analysis, memos become longer and take on more depth" (Corbin & Strauss, 2015, p. 121). Early on, in the beginning stages of your analysis, it is likely that your memos will take the form of descriptions and beginning hunches about some aspect of the data collection process. As your iterative analysis proceeds, your memos will change from being mainly descriptive to more analytic and interpretive in focus.

PUTTING IT INTO PRACTICE
WHAT DO YOU WRITE MEMOS ABOUT AND WHAT FORM SHOULD THEY TAKE?

There is no set form that a memo must take, nor a set topic that it must address. Memos might be descriptions of what happened when you were collecting your data, your hunches about what is going on related to your research problem or questions, or your thoughts about what you will need to find out more about.
Memos can also be used to highlight consistencies or inconsistencies in the data collected that require further elucidation or follow-up. For example, if you are using some form of qualitative interview to obtain data, there might be consistencies or inconsistencies within an individual interview, or ones that emerge across interviews as you collect more interview data. You can use memos to capture your thoughts about this as you read the transcript of, or reflect on, the sound file of the interview. An example is the following memo, written during one of the interviews in a study one of us was involved in about the process of searching for and selecting an aged care facility following the discharge of a family member from an acute setting. The focus was participants' perceptions of the search and selection process and its effects (see Cheek & Ballantyne, 2001).

This participant, like every other person we have interviewed so far, mentioned that they had to fill in different forms for each aged care home when applying for a space for an older person and that this caused them great stress. Some mentioned having to apply to 10 or more residential care homes to try to get a place. Maybe we need to think more about the effect of something that can seem so ordinary such as the filling in of an application form. How has this situation arisen? What is this an example of and what is causing and sustaining it? Maybe need to check other studies to see if this has come up in them. Can also
probe more in next interviews—perhaps even return to those already interviewed to seek more clarification????? (memo related to lines 50–55 of the transcription of interview with participant X, taken from private study notebook, Cheek & Ballantyne).

This memo is about a surprising consistency that we identified emerging across the interviews, and our thinking about what this might mean in terms of what else we might need to find out about. Ultimately, after more data about this was obtained and much more thinking was done, the series of actions that began with the initial hunch captured in this memo led to the emergence of one of the key findings of the study. This was the stress caused by dealing with an uncoordinated system, which "meant that each form had to be collected or requested, filled out, and returned to each facility" (Cheek & Ballantyne, 2001, p. 231). The statement from the interview that this memo was made about was:

[h]alf of them wanted them to be signed by a justice of the peace and have a doctor's signature. Everyone wanted a doctor's report, so you'd have to go back to the doctor to get a doctor's report and everyone was writing differently. . . . Instead of treating them all the same, so that we could photocopy them and send them out, they all wanted something different. (Cheek & Ballantyne, 2001, p. 231)

Our initial code for this section of text was lack of system and coordination, and it became part of the overarching theme we called "Dealing With the System—Cutting Through the Maze" (see Cheek & Ballantyne, 2001, pp. 228–231).

Why You Should Not Wait to Begin Analyzing Your Qualitative Data Until All Your Data Is Collected

The previous discussion highlights that the analysis of qualitative data, for example a single interview, begins as you are undertaking that interview, and continues throughout the entire process of the research as more interviews are conducted.
In this process, you will return to each of your interviews and compare your initial analyses of them with the analyses you make of each new interview that you have just done. Do the insights gained from new interviews change in any way those gained from earlier ones? Are they in keeping with them? Are there new or different insights emerging from a new interview? This type of reflexive iterative thinking involves "an ongoing dialogue with and between data and ideas" (Coffey, 2018, p. 25). For example, you may add questions to your interview schedule because of responses (possibly unanticipated) that you have received and now need to know more about. You cannot do this if you wait to begin the analysis of your qualitative data until all your data is collected.

TIP
DIFFERENT WAYS OF THINKING ABOUT THE TIMING OF COLLECTING AND ANALYZING DATA

This is a very different way of thinking about the timing of, and strategy for, the collection and analysis of data compared to quantitative research approaches. In quantitative research approaches, the collection of all the quantitative data occurs in a
standardized way and precedes the analysis of that data. This is because in quantitative research approaches, all the data (i.e., all the numbers) need to be collected before statistical rules and procedures can be used to analyze those numbers and determine what type of conclusions can be drawn from that analysis. These statistically derived rules and procedures are standardized and consist of very structured and clearly articulated steps to be followed linearly.

DEVELOPING AN ITERATIVE QUALITATIVELY DRIVEN ANALYTIC STRATEGY

Given the iterative nature of the analysis and interpretation of qualitative data, it is not possible to give you a prescribed set of standardized procedural rules or instructions to be followed in all instances when analyzing the qualitative data that you have collected. Design considerations related to analyzing and interpreting qualitative data cannot be reduced to, or captured by, a fixed linear series of prescribed steps or phases. However, although qualitative analysis cannot be reduced to a linear series of steps, or a normative or standardized checklist of what to do, when and how to do it, this does not mean that analysis of qualitative data is in any way a "free for all" where anything goes. There are "features that recur in many approaches to qualitative analysis" (Miles et al., 2014, p. 10). For example, no matter what type of qualitative data you are analyzing, or which specific qualitative approach you are using in your study design, you will simultaneously need to develop strategies related to

• organizing the data you collect, and keeping track of your analytical thinking about that data, and
• deciding on some sort of process for making analytical choices about what parts of the large body of data you have collected are relevant for addressing your research questions.
In this section of the chapter, we will explore some of the strategies developed and used in many approaches to qualitative analysis to address the dot points above.

Strategies for Organizing the Data You Collect and Keeping Track of Your Analytical Thinking About That Data

When you are collecting qualitative data, you will need to think about what form that data will take and how it will be organized. At the same time, you will also need to think about how you will organize and keep track of your iterative analysis of that data, including any memos that you make related to that data. In other words, you will need to make choices about how you will sort and compile your data, including the record of the process of its analysis. Such a compilation will provide you with a type of dynamic, qualitatively derived "database" (Yin, 2016, p. 186) of both your initial data and your progressive analytical thinking about that data. For example, if you are using interviews to collect your data, you will need to think about how you will organize that interview data. It is highly likely that during each interview you will make a record of what is said using some form of recording device. It makes the interview data much easier to work with if the sound file of the interview is converted
into written text as soon as possible after the interview. This is called transcribing the interview. On the surface, transcription may seem to be a simple exercise of converting spoken words to written words to provide written texts that can then be analyzed—a prelude to the analysis. But this is not the case. Transcribing an interview is more than a mechanical process or a prelude to the analytical process. Rather, producing the transcript is part of the analysis itself. During the process of transcribing an interview, you revisit what happened in that interview while becoming more familiar with both what was said and the interactions that led to it being said. In other words, you immerse yourself in that interview data and begin to analyze it. Listening to the sound recording of an interview and producing the written word-by-word transcript of it provides a way of reflecting on that interview. When listening to the interview, you can think about aspects of what happened during it: Why was that said? Why was there silence at that point of the interview? Why did this question seem to be so important to the participant? These reflections can be captured in memos you make to yourself as you are producing the transcription. When doing so, you begin "making connections or asking questions about why something is the way it is" (Mayan, 2009, p. 89). In other words, you are in the process of analyzing that data.

On a more practical level, you will need to think about how you will organize your transcribed data so that such a large amount of data can be navigated effectively—both within, and across, interviews. One way of doing this is to number each interview (e.g., Interview 1, 2, and so on) as well as numbering each line of the transcript so that it is possible to readily find, and refer to, specific segments of the text (e.g., Interview 5, line 250).
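If it helps to see this numbering scheme concretely, here is a short Python sketch of how numbered interviews and transcript lines might be stored so that any segment can be cited by interview and line. The structure and the interview text are our invented illustrations, not data or code from any actual study.

```python
# Hypothetical sketch: each transcript stored as a numbered list of lines,
# so any segment can be cited as (interview number, line number).
transcripts = {
    1: ["Interviewer: How did the search for a facility begin?",
        "Participant: We had to move very quickly after the discharge."],
    2: ["Interviewer: What stood out most for you?",
        "Participant: Every home wanted a different application form."],
}

def segment(interview: int, line: int) -> str:
    """Return the text at a citable location, e.g., Interview 2, line 2."""
    return transcripts[interview][line - 1]  # transcript lines number from 1

print(segment(2, 2))
```

The point of the sketch is only the addressing scheme: once every line has a stable number, a memo or code can point back to exactly the segment it is about.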
You will also find it useful to write a separate short description of who was interviewed, including any relevant demographic data about that participant, and any other contextual details you think are relevant or useful. However, when doing so it is important that you consider and think through the ethical issues we discussed previously in Chapter 2 that may be raised by linking demographic and interview data and thereby inadvertently violating participant confidentiality and anonymity.3

TIP
ANOTHER THING TO THINK ABOUT

Even more issues arise if for some reason the researcher does not transcribe the interview themselves and instead, for example, the interviews are transcribed by a research assistant, another researcher in the team who did not conduct the interview, or a paid transcription service. See Liamputtong (2013, pp. 66–68) for a good discussion of these issues and what you will need to think about when addressing them.

It is also important to transcribe (if necessary) or systemize the way that you compile any memos you have made during or after interviews, or during the transcription process. Each memo should be given numbers, labels, or descriptors that make it easy for you to find both specific memos (e.g., Memo 4 made 5/1/2022 after Interview 2 in relation to . . .) and the specific parts of the interview that the memo relates to (e.g., Memo 4 related to Interview 2, lines 2–7).
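The memo labeling just described can also be sketched as a simple record structure. The field names and memo content below are hypothetical, a sketch of one way to keep memos cross-referenced to interview lines; any real study would adapt them.

```python
# Hypothetical sketch: each memo record keeps a number, a date, and the exact
# interview lines it refers to, so memos remain easy to find and cross-reference.
memos = [
    {"memo": 4, "date": "2022-01-05", "interview": 2, "lines": (2, 7),
     "note": "Recurring stress around application forms; compare earlier interviews."},
]

def memos_for(interview: int) -> list:
    """Retrieve every memo attached to a given interview."""
    return [m for m in memos if m["interview"] == interview]

for m in memos_for(2):
    print(f"Memo {m['memo']} -> Interview {m['interview']}, "
          f"lines {m['lines'][0]}-{m['lines'][1]}")
```

Whether you keep such records in a notebook, a spreadsheet, or software, the design choice is the same: every memo carries enough labels to be found on its own and to lead you back to the data it interprets.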
Strategies such as these ensure that as you develop your inventory, or type of database, of memos and interviews, you are building a rich repository of information that can be navigated effectively because it is organized in a careful and systematic way. Without some sort of system such as the ones outlined above, it will be easy to have a sense of drowning, rather than being immersed, in your data!

Strategies for Deciding What Parts of the Data You Have Collected Are Relevant for Addressing Your Research Problem

In any qualitative analytical strategy, you will make choices about what parts of the large body of data you have collected are relevant for addressing your research questions. When doing so, you will use some sort of strategy for data condensation. Data condensation refers to "the process of selecting, focusing, simplifying, abstracting, and/or transforming the data that appear in the full corpus (body) of written-up field notes, interview transcripts, documents, and other empirical materials" (Miles et al., 2014, p. 12).

TIP
CONDENSATION OR REDUCTION?

Sometimes you will see this process referred to as reduction of data. We use the term condensation, rather than reduction, because it better captures the idea of putting data together, or condensing it, to create richer and more nuanced data. Reduction, on the other hand, can give the impression of taking something away from that data, or making it lesser in some way (Miles et al., 2014).

This process of data condensation begins the moment we start making research design–related choices about our qualitative inquiry—not just when we are analyzing the qualitative data we have collected. As Miles et al.
(2014) point out,

Even before the data are actually collected, anticipatory data condensation is occurring as the researcher decides (often without full awareness) which conceptual framework, which cases, which research questions, and which data collection approaches to choose. As data collection proceeds, further episodes of data condensation occur. . . . The data condensing/transforming process continues after the fieldwork is over, until a final report is completed. (p. 12)

Therefore, when you condense the data that you have collected, you are not just making that volume of data more manageable. You are also making analytic choices about what parts of that data are relevant and can contribute to addressing your research question(s). For as Wolcott (1994) reminds us, "Everything has the potential to be data, but nothing becomes data [including what is said in interviews, done in observations, or written in texts] without the intervention of a researcher who takes note—and often makes note—of some things to the exclusion of others" (pp. 3–4).

The Process of Data Condensation

If, for example, you have collected qualitative data using some form of interview, then the first thing you will need to do is to read the transcript of each interview and any memos or
observations that you have made related to the collection of that data. This enables you to get a "sense of the whole" (Patton, 2002, p. 440),4 or what is there, in terms of the data you have collected. As you read and re-read those texts, you will simultaneously begin to think about what things stand out, or seem important, with respect to the research questions you are asking. One way of doing this is by highlighting sections of your transcribed interview texts that relate to the particular lines of inquiry that you are interested in, and around which your interview guide was constructed.5 In this way, your lines of inquiry provide an organizing construct for what you highlight. For example, you might highlight in green the sections of text on a printed, or electronic, version of the interview transcript that seem to be related to line of Inquiry 1, orange for sections of text related to line of Inquiry 2, and so on. The chunks, or sections, of text that you highlight are ones that you have identified as meaningful in some way in relation to your lines of inquiry, and hence your research questions or aims. Put another way, the sections of highlighted text are units of data that are "a potential answer or part of an answer to the question(s) you have asked in this study" (Merriam & Tisdell, 2016, p. 203).

Instead of working with pen and paper, or using text highlighting on a digital transcription in your preferred word processing software, you might choose to use one of the computer software programs that have been developed to assist in selecting or presenting sections of text that might be relevant to your lines of inquiry. However, if you do choose to do this, it is critical to remember that such programs do not do the analytical thinking and interpretation for you. Rather, the "basic idea behind these programs is that using the computer is an efficient means for storing and locating qualitative data" (Creswell, 2014, p.
195).6 It is important to remember that no matter which way you choose to work—"low tech" (highlighter pen and paper) or "high tech" (software programs)—the outcome will only be as good as the thinking behind why you highlighted certain chunks, or sections, of text in the first place. These highlighted sections of text "should reveal information relevant to the study and stimulate the reader to think beyond the particular bit of information" (Merriam & Tisdell, 2016, p. 203). Examples of thinking beyond the particular bit of information include asking yourself questions such as these: How might this unit of text be explained? What could help me explain it? Why was this said and not that? Does this mean that . . . ? Such a questioning process "sharpens, sorts, focuses, discards, and organizes data in such a way that 'final' conclusions can be drawn and verified" (Miles et al., 2014, p. 12).

PUTTING IT INTO PRACTICE
USING COMPUTER-ASSISTED QUALITATIVE DATA ANALYSIS SOFTWARE (CAQDAS)

There is a variety of software designed to assist qualitative data analysis, including ATLAS.ti, MAXQDA, NVivo, and many more (see Saldaña, 2021, p. 48, for a more comprehensive list of software to explore). Computer-assisted qualitative data analysis software (CAQDAS) is specifically designed to assist researchers when
"organizing, managing, coding, sorting, and reconfiguring data—both transcribed textual documents, PDFs, photographs, online data, and digital audio/video files" (Tracy, 2020, p. 242). While such software may help you "find, categorize, and retrieve data and texts more quickly than using a manual search" (Liamputtong, 2020, p. 269), in themselves they will not analyze the data:

Just as word-processing programs like Microsoft Word do not write a paper, and presentation programs like PowerPoint do not, by themselves, design a slide show, CAQDAS does not analyze data on its own. Rather, CAQDAS facilitates qualitative data analysis—just as word-processing software facilitates writing and presentation software eases presentation design. (Tracy, 2020, p. 242)

In other words, the need for immersion in the data, accompanied by iterative and reflexive thinking and deep mental involvement in the total body of data, is not reduced just because you choose to use some form of software to assist you when analyzing qualitative data. For example, if your data include transcripts of qualitative research interviews, your total body of data will include many(!) pages of text. While CAQDAS enables coding text while working on a computer (no need for manually writing codes in the margin of interview transcripts, for example), and retrieving and grouping all chunks of text marked with the same code (no need for manually copying text associated with the same code and then pasting it together in a separate document), you will still need to read the transcripts, and work iteratively with the data, as well as come up with the codes. Deciding whether to use CAQDAS to assist your analysis is a matter of cost, preference, and how much time you will need, or have, to spend mastering new software.
Bryman (2016) reminds us that

CAQDAS may be too expensive for your personal purchase, but if you have a free access to the software, you may like to try. If you plan to use it in future research, it may be worthwhile taking the time to learn. (p. 603)

To sum up: The key point to remember is that any CAQDAS will not in itself analyze the qualitative data. It may assist in organizing and keeping track of that analysis, but even then, this will only be as good as the parameters that you enter to guide the software's functions. It is you who will need to do the hard thinking work that underpins qualitative analysis. It is also you who will have to justify the way that the CAQDAS program has organized your data. Therefore if, when reporting your research, you simply state that the data you collected were analyzed using some form of software, "reviewers will know that you lack knowledge about data analysis in qualitative research" (Liamputtong, 2020, p. 269).

CODING—A STRATEGY TO CONDENSE YOUR DATA

In many forms of qualitative analysis, but not all,7 the process of sharpening, sorting, focusing, discarding, and organizing data is associated with the strategy of coding your data. Coding refers to "the active process of identifying, labelling and systemizing data as belonging to or representing some type of phenomenon" (Tracy, 2020, p. 234). When coding, you will identify a segment of data, such as a word or a series of words in an interview transcript, and give that segment of data an initial label or code that captures what the segment of data is about or relates to. Thus,

When you assign a word to a part of your data, when you write something in the margin of a transcript, when you underline a word in a document, when you focus
on a specific part of a visual, you are coding. . . . Coding is the first step in being able to say something about the data. . . . It is the first step in enabling you to make comparisons among pieces of data (Mayan, 2009, pp. 88–89).

For example, in the study mentioned earlier in this chapter about the process of searching for and selecting an aged care facility following the discharge of a family member from an acute setting, the focus was participants' perceptions of the search and selection process and its effects (see Cheek & Ballantyne, 2001). Therefore, when reading the transcripts of those interviews, you would highlight the sections of text in an interview transcript where the person being interviewed talks about something related to, or a specific aspect of, that experience. For example, they might tell you, "I had to sell most of the furniture and personal items I had for many years." Thinking about what this is an example of, you may assign it the initial code LOSS. Later in the interview you might highlight another section of text where they told you, "I knew I could not stay and had to move." You may assign this the initial code RESIGNATION. You will continue to do this as you work through your interview transcript. This phase of initial coding of your data is sometimes called first cycle (Saldaña, 2021) or primary-cycle (Tracy, 2020) coding. However, just having developed initial8 or tentative codes for segments of your data does not mean that your thinking about those initial codes is finished. In fact, in many ways, your thinking about those codes has just begun. Throughout the iterative process of coding, you will collect more data, identify more codes within that new data, write memos about those codes, and compare these additional codes and memos to those you already have.
For example, if using interviews to collect your data you will constantly revisit your initial codes and memos in order to refine or confirm them by comparing them with the codes and memos you develop when analyzing the next interview. When doing so, you will look for patterns in the codes across the interviews and will begin to undertake a process of what Yin (2016) refers to as a “[r]eassembling phase” (p. 202) of iterative data analysis. As you become more familiar with, and think more about, the codes that you are giving to the segments of text that form your units of analysis, you will begin to look for patterns or regularities in those codes in terms of what they are instances or examples of. To do so, you will need to “organize, synthesize, and categorize these codes into interpretative frameworks” (Tracy, 2020, p. 234). This is second cycle (Saldaña, 2021) or secondary-cycle (Tracy, 2020) coding. This type of coding requires “such analytic skills as classifying, prioritizing, integrating, synthesizing, abstracting, conceptualizing, and theory building” (Saldaña, 2021, p. 89). It requires you to ask questions of yourself such as, Are there segments of text (i.e., codes or parts of memos) that all talk about or relate to a particular aspect of a line of inquiry you are interested in finding out something about? Are there common understandings and assumptions in the segments of text that you are thinking of putting together? Or are the segments of text you have coded different instances or examples of the same idea? If they are, could the codes related to these segments of data be grouped together, or reassembled, to form a category—a higher level of focus that captures what the codes in that category are instances of? The development of categories, like coding, is an iterative process. As you collect more units of data, and new and different codes emerge, you also revisit your tentative categories and modify them in light of the emergent codes. 
You compare the developing and tentative categories to each other as they are being developed. This can then lead to the emergence of major themes that capture your findings.
When developing categories, it is important to keep in mind that a category is not just a collection of codes. It is also a collection of the ideas behind those codes. This is imperative if the formation of categories is to move beyond the mechanical clumping of codes together, and if the description and understandings of themes or categories are to move beyond the superficial description of each individual code in that category.

TIP
USING TERMS WISELY AND CONSISTENTLY WHEN DESCRIBING YOUR ANALYTIC STRATEGY

You will soon notice that some researchers use the terms codes, categories, themes, and findings interchangeably in discussions of qualitative data analysis or reports of research that used some sort of coding strategy. For example, Merriam and Tisdell (2016) declare that "in our view a category is the same as a theme, a pattern, a finding, or an answer to a research question" (p. 204). On the other hand, Mayan (2009) asserts that often there is confusion and a lack of understanding about the difference between categories and themes:

Themes are thoughts or processes that weave throughout and tie the categories together. Theming, then, is the process of determining the thread(s) that integrate and anchor all of the categories. To form themes, the researcher returns to the "big-picture" level and determines how the categories are related. . . . Through the categories and then the themes, the researcher can make overall conclusions about the research (p. 97).

The point that we are making here is that terms such as codes, theme, category, findings, and conclusions, while they may sound like everyday words, cannot be used in a commonsense or everyday way when talking about and describing qualitative analysis. In this context, they are research method derived terms related to specific (although, as we have seen, at times different) ways of thinking about complex aspects of analyzing qualitative data.
Therefore, they must be used thoughtfully, with care, and with acknowledgment of the complexity that their seeming simplicity masks. This applies whether or not you are using coding as your principal analytic strategy. Given the different views about terms such as codes, theme, category, findings, and conclusions, it is important that you think through these terms (like all others that you use as part of describing your research design), decide on the way that you will use them, declare this, give reasons for that choice, and then be consistent in that use.

MORE CHOICES AND DECISIONS TO MAKE WHEN PUTTING CODING INTO PRACTICE

Having made the decision to include coding as part of your analytical strategy in your research design, you still have more decisions to make. This is because there is considerable variation in the way qualitative researchers who choose to use coding as part of their analytic strategy put coding into practice. How they put coding into practice is influenced by what they are trying to find out about—their research questions.

Methodological Choices About Whether to Employ an Inductive or Deductive Approach to Your Coding

One of the choices you will have to make is whether you will employ an inductive or deductive approach when coding your data. This choice is related to the purpose of the
research and what type of knowledge will be able to meet that purpose. Given the exploratory and interpretive nature of qualitative inquiry, most qualitative researchers, at least initially, will employ inductive logic or thinking when coding. This is because working in this way, they can remain "open to what the site [or the person or the text] has to say rather than determined to force-fit the data into preexisting codes" (Miles et al., 2020, p. 74). These initial codes are developed inductively from the researcher's reading of, and thinking about, the data. The goal is to get a sense of the whole or what is going on in the situation of interest related to the problem or research questions that their research is about. What is happening? Why? How? These are the types of questions the researcher will ask themselves as they constantly think through their data, as well as the codes, categories or themes, and initial tentative interpretations that they have inductively developed.

Other qualitative researchers, from the outset of the coding process, may use a form of deductive coding. They look for the presence of codes that are instances of the specific aspects or characteristics of their research problem that they identify as being the ones of interest. Therefore, they begin their coding strategy with a list of codes they have developed before reading the data or before any analysis has taken place. The rationale for this development is that the "conceptual framework, research questions, and other matters of research design suggest that certain codes . . . are most likely to appear in the data you collect" (Saldaña, 2021, p. 40). One device that may be used as part of a deductive coding strategy is some form of codebook or list of predefined codes that the researcher specifically looks for when reading the transcripts of interviews or field notes of observations.
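The mechanics of such a deductive pass can be caricatured in a few lines of code. The sketch below is a deliberately crude toy, not a substitute for the interpretive reading the authors describe: the codes, keyword lists, and transcript excerpt are all hypothetical, and real deductive coding rests on researcher judgment, not keyword matching.

```python
# Toy deductive coding pass: a predefined code list (as a deductive
# researcher would start with) applied to a transcript segment.
# Keyword matching stands in, crudely, for the researcher's reading.
predefined_codes = {
    "safety": ["safe", "safety", "secure", "scared"],
    "belonging": ["friends", "community", "welcome"],
}

def code_segment(segment, predefined_codes):
    """Return the predefined codes whose keywords appear in a segment."""
    text = segment.lower()
    return [code for code, keywords in predefined_codes.items()
            if any(word in text for word in keywords)]

excerpt = "I was scared for my own safety, but my friends kept me going."
applied = code_segment(excerpt, predefined_codes)
# applied -> ["safety", "belonging"]
```

Note what the toy makes visible: the deductive researcher only ever finds instances of the codes brought to the data, which is exactly why most qualitative researchers begin inductively instead.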
The codebook is "a compilation of the codes, sometimes accompanied with their content descriptions, and a brief example for reference" (Saldaña, 2021, p. 41). The researcher looks specifically for examples of those codes in the data. The difference between these two approaches to coding can be summed up in the following way: "[T]he deductive researcher starts with a preliminary list of codes, and the inductive researcher ends up with one" (Saldaña, 2021, p. 41). The researcher who has developed their inductively derived list of preliminary codes can then use this set of preliminary codes to guide their second-cycle coding and thereby work more deductively. They may even choose to use some form of tentative and evolving "codebook" developed from their inductively derived codes when doing so.

PUTTING IT INTO PRACTICE: WHAT YOU WILL NEED TO THINK ABOUT IF YOU CHOOSE TO DEVELOP A CODEBOOK OF SOME SORT

If you do decide to use some sort of codebook, what might this codebook look like and how will you develop it? There is not just one way to develop a codebook. Codebooks vary in terms of how much detail they contain for each code, and also how they are used. They also vary, as we have seen, in terms of when and how the codes are developed—"[T]he deductive researcher starts with a preliminary list of codes, and the inductive researcher ends up with one" (Saldaña, 2021, p. 41). Guest et al. (2006) provide a good description of how they developed a codebook using what they describe as a standard iterative process. Each code in the codebook had
(1) a "brief definition" to jog the analyst's memory; (2) a "full definition" that more fully explains the code; (3) a "when to use" section that gives specific instances, usually based on the data, in which the code should be applied; (4) a "when not to use" section that gives instances in which the code might be considered but should not be applied (often because another code would be more appropriate); and (5) an "example" section of quotes pulled from the data that are good examples of the code. (Guest et al., 2006, p. 64)

An example of putting (1)–(5) above into practice is provided by Creswell (2016) using the hypothetical example of a code called Safety used in a codebook. See Table 7.1.

TABLE 7.1 ■ The Hypothetical Example of a Code Called Safety

Code: Safety
Brief Definition: Whether students felt safe after the incident
Full Definition: Safety as identified through possibility of the incident occurring again, safety in their dorm rooms or on campus
When to Use: When students actually mention the word safety or a synonym
When Not to Use: When it cannot be reasonably interpreted as the students' talking about safety or their security
Sample Quotes: "I was scared for my own safety." "The campus was no longer a safe environment for me or my friends."

Source: Taken from Creswell (2016, p. 197, Table 23.1: A Sample Codebook Organization Based on Guest, Bunce, and Johnson [2006]). Reproduced with permission.

If you would like to read more about the development of codebooks, useful places to start are:

• the article by Jessica T. DeCuir-Gunby et al.
(2011), "Developing and Using a Codebook for the Analysis of Interview Data: An Example from a Professional Development Research Project." The authors discuss how they used codebooks to analyze data from semistructured interviews including how they developed the codes from the data (i.e., data-driven codes) and provide a sample of such data-driven codes, including descriptions and examples.

• Sarah Tracy's (2020) discussion of focusing and creating a codebook provides a useful overview of developing a formal codebook: "[A] data display that lists key codes, definitions, and examples that are going to help you guide your analysis" (p. 221). She includes an excerpt of a codebook (p. 222) that she used in a study she was part of about "male executives' viewpoints on gender, work and life" (p. 221).9

Choosing a Coding Strategy Congruent With the Theoretical Pillars of Your Design

As we highlighted previously in Chapters 4, 5, and 6, the theoretical and disciplinary assumptions that we bring with us to the research design table provide a lens through which we make choices about our research design. This includes the choices we make about what analytic strategies will form part of that design. Therefore, given the variation in the disciplinary and theoretical assumptions that underpin qualitative research approaches, it is not surprising that there are differences between these approaches
related to whether, and if so how, codes and coding are used as part of a specific type of qualitative analytical strategy.

Coding in Grounded Theory

For example, if you are using grounded theory (Charmaz, 2006, 2014; Glaser & Strauss, 1967) as the whole-of-study theoretical and methodological approach underpinning your research design, then your coding strategy will reflect the understanding of coding that grounded theory is premised on. This understanding is that coding is part of an analytic strategy designed to enable you to develop a theory emerging from that strategy able to address your research problem or provide answers for your research questions. The theory is grounded in, and arises from, the analysis and interpretation of the data collected—hence the name grounded theory. Specific types of coding10 and specific ways of memoing are part of the analytical strategy designed to enable that grounded theory to emerge. However, when designing an analytical strategy for a grounded theory study, you will also need to keep in mind that there are different schools of thought with different emphases about what grounded theory is and is for (see Charmaz, 2014; Glaser & Strauss, 1967; Strauss & Corbin, 1998). The way you develop your analytic strategies within grounded theory, including which coding practices you employ (such as constant comparison), or how you use the literature,11 must be congruent with the understandings and emphases of the type of grounded theory that you have chosen to employ in your study. For example, if your grounded theory study is informed by constructivist or interpretative inquiry paradigms, then your analytical strategies will reflect the basic belief that "subjectivity is inseparable from social existence" (Charmaz, 2014, p. 14). You will emphasize developing grounded theory based on interpretative understandings.
Rather than assuming the role of a neutral and objective observer, you will acknowledge and engage reflexively with the subjectivity you bring to your study, and "examine rather than erase how [your] . . . preconceptions may shape the analysis" (p. 13). This includes how your preconceptions may shape the codes that you develop as part of that analysis. You will actively seek out the social contexts that make up your study and reflect on, and make analytical memos about, that context as part of your data collection and analysis. Thus, the form that your analytic strategy takes reflects your belief that "we construct our grounded theories through our past and present involvements and interactions with people, perspectives and research practices" (p. 17).

Therefore, when justifying using some sort of coding as part of your overall analytic strategy in your research design, it is not enough to simply say that you are "doing," for example, grounded theory or "using" a grounded theory approach and assume that this makes it clear what you are "doing" or "using." Rather, you will need to explain why your study is informed by a specific approach to qualitative inquiry, such as grounded theory, and how you have put that approach into practice. For example, what type of grounded theory have you chosen to use and why? How is the strategy that you have chosen for analyzing the qualitative data that you have collected congruent and consistent with that choice? This includes how and what you code, what you do with those codes, and what interpretations you make based on them. To be able to do this will require hard work, and a lot of thinking on your part.
TIP: LEARNING FROM OTHERS

Nagel et al. (2015) provide an excellent account of what they needed to think about when navigating the different ways that grounded theory is thought about and designed when they were doing doctoral studies using grounded theory. They describe such navigation as "weaving through a myriad of paths in a landscape of varied and divergent perspectives of ontological and epistemological philosophies on GT, requiring navigational skills we would not have anticipated at the outset of our dissertation work" (p. 366). The discussion that they provide about how they navigated this myriad of paths provides an excellent example of what is often missing from reports of research, namely what they needed to think about when putting grounded theory into practice, including the analytic strategy they would employ, and why.

Rounding Off Our Discussion of Coding

The key point that we have been making in this section is that there is variation in coding strategies, just as there is variation in the overall analytic strategies employed in qualitative inquiry. This variation in coding strategies occurs both within specific qualitative approaches (e.g., between different types of grounded theory) and between different types of qualitative approaches. To be able to employ a coding strategy "well" requires you to know a lot about the idea of codes and coding themselves in order to be able to think through and make choices about what employing them "well" in the context of your particular research design means. This includes ensuring that the choice of using coding as part of your analytical strategy at all is congruent with the methodological and theoretical traditions that your research design is premised on. Equally, employing a coding strategy "well" requires you to think about how you will design that strategy.
How will you put the idea of coding into practice in such a way that it is congruent with the purpose of your research and the qualitative approach you are using? For example, in other qualitative approaches you are not developing codes, or employing coding strategies, to produce emergent grounded theories. Rather, both the type of codes that you develop, and the strategy that you use when developing them, will reflect the theoretical and disciplinary traditions of the qualitative research approach that you have chosen to underpin your research design. Finally, when designing your coding strategy, you will need to be aware of the critiques of using coding and codes (see tip box The Importance of Thinking About Why to Use Coding, Not Just How to Do It below) and be able to justify the position that you take up in relation to them in that coding strategy. It is important to remember that codes or categories or themes arising from the analysis of your data in themselves are not the findings of the study. The “art of interpretation” (Denzin & Lincoln, 2000, p. xii) that is central to qualitative analysis is not complete just because categories or themes are developed. There is more thinking to do. This thinking is about how all the parts of your analysis fit together, how they relate to, and what they reveal about the problem, issue, or question that your research has been
designed to address or illuminate in some way. Such reflexive thinking is central to all qualitative analytical strategies, not just those employing coding as part of that strategy. We develop this point in the next section of the chapter.

TIP: THE IMPORTANCE OF THINKING ABOUT WHY TO USE CODING, NOT JUST HOW TO DO IT

Some scholars view the idea of coding and using coding strategies when analyzing qualitative data with a great deal of skepticism and suspicion. Coding and coding strategies are viewed as reflecting the "[l]ong reach of logical positivism" (St. Pierre, 2016, p. 19). This is because codes and coding strategies have become taken-for-granted procedures with little or no thought given to what the use of the idea of codes and coding might assume. These include the assumption that, when coding data, "words can contain and close off meaning" in the form of codes, or that "practices of formalization and systematicity [such as reducing coding to some form of prescribed steps] . . . guarantee rigor" (both quotes from St. Pierre, 2016, p. 26). Related to this critique of the emphasis on formalization of coding as a procedure, some qualitative researchers are concerned that "coding routines can produce their own distractions—for example, struggling with the mechanics of the coding process rather than being able to think deeply about the data" (Yin, 2016, p. 200). In other words, coding becomes a mechanistic procedure in itself, rather than part of an overall analytical strategy. The focus is then on following the "right" procedure rather than what the codes and coding are for. This has the effect of distancing you from your data and reducing coding to an instrumental technique. The result is unthoughtful and poor coding strategies. However, as Saldaña (2021) wisely points out, critiques such as those above may be more a critique of not doing "[c]oding well" (p. 22) than critiques of the idea and concept of coding itself.
We take the view that coding done well can be an important and central part of the overall analytical strategy used to analyze and interpret the qualitative data that you have collected. We both have used forms of coding in our qualitative research studies. This is because coding is a way that you can demonstrate that your work is rigorous in that it "has adhered to some set of criteria that provides for systematicity, and for public inspectability" (Lincoln, 2015, p. 204). In this way, coding provides a way of showing how your interpretations were arrived at, including the logic behind the choices you made to arrive at those interpretations. Consequently, it provides a type of audit trail for readers of your research to follow (Lincoln, 2015). This audit trail is part of establishing the credibility and authenticity and therefore the dependability and trustworthiness (Lincoln et al., 2018) of your findings and any claims made on the basis of them.12 "If a report is credible, readers feel confident in using its data and findings to act and make decisions" (Tracy, 2020, p. 275).

THE ART OF INTERPRETATION

Using an iterative analytic strategy, you actually begin making tentative interpretations of the data from the moment you begin to collect it. For example, you are making tentative interpretations when you reflect on, analyze, and write a memo about why a person you are interviewing said what they did about what you asked them about. As you simultaneously collect more data and analyze that data, the level of focus of your interpretation shifts. For example, you begin to compare and reflect on the initial interpretations that
you have made as you begin to develop categories or themes. You also revisit decisions you made about codes and categories or themes in the earlier phases of the analysis.

    The focus, and purpose, of your interpretation is not a narrow one, such as interpreting the data in a specific table [or for our purposes here we could add in a single interview, observation, or text]. Rather, the goal is to develop a comprehensive interpretation—still encompassing specific data—whose main patterns and themes will become the basis for understanding your entire study. (Yin, 2016, pp. 220–221)

Developing such a comprehensive interpretation requires thinking about how the ongoing analysis or interpretations of the data that you have collected come together into a higher level of abstraction. To do this requires "taking stock of what is being worked with" (i.e., the data you have and any previous analytical decisions you may have made about it) and "a process for making a statement about the topic of inquiry" (i.e., the analytical strategies that you are employing). In other words, developing a comprehensive interpretation requires "some sort of transformation of what is identified, organized, selected, created, recognized into a statement about the topic of the inquiry or 'findings'" (all quotes from Freeman, 2017, p. 3). It is this transformation which gives rise to the conclusions you make related to your research problem/questions. Such reflexive and demanding work comprises the art of interpretation. It is this intellectual work that enables the thick description13 and thick interpretation (Denzin, 1989a) that qualitative inquiry aims for. Remember, thick description, and the thick interpretation it can give rise to, is not about how much description you have. Rather, thick description is defined by the "kind of intellectual effort" (Geertz, 1973, p.
6) it employs:

    [T]o thickly describe social action is actually to begin to interpret it by recording the circumstances, meanings, intentions, strategies, motivations, and so on that characterize a particular episode. It is this interpretive characteristic of description rather than detail per se that makes it thick. (Schwandt, 2015, p. 306)

Such iterative, careful, and interpretive intellectual work enables you to address questions such as

• Can the findings of my research help understand the problem that triggered the research in the first place? Why or why not?
• How can the findings be used in practice?
• What theoretical concepts can be used to add richness and help explain what I have found?
• How does what I have found add to or change our understandings of those concepts or theories?
• What implications do the findings have for what else I, or others, might want or need to know about related to this problem?
• What ideas might need to be explored in further research?
• Can the findings of this study, including any theoretical constructs developed, be applied to other situations or instances besides the one studied using what Yin (2014, 2016) calls a form of analytic generalization?
• Does the design and conduct of the research offer new or different ways of studying theories, concepts, or issues—and if so, how? Is it methodologically significant? (Tracy, 2020)

Thinking about these types of questions is central to the systematic and reflexive work needed to establish the credibility of your research and the significance of the findings of, or conclusions reached by, that research.

Ways of Establishing the Credibility of the Interpretations You Make and Therefore the Rigor and Trustworthiness of Your Research

Establishing the rigor and credibility of your research, including the interpretations you make and conclusions you reach based on them, is closely linked to the idea of establishing the trustworthiness (Lincoln & Guba, 1985) of the research. "What is trustworthiness? The basic issue in relation to trustworthiness is simple: How can an inquirer persuade his or her audiences (including self) that the findings of an inquiry are worth paying attention to, worth taking account of?" (Lincoln & Guba, 1985, p. 290). Or put another way, how can you persuade the audience of your research that they can trust that your research, and the claims that it makes, are credible and significant? One way to do this is to persuade them of the rigor and credibility of the interpretations you have made about your data. When we talk about rigor in qualitative inquiry, we are talking about "the care and effort taken to make sure that the research is carried out in an appropriate manner" (Tracy, 2020, p. 271) such as, for example, "spending enough time in the field to gain trust; practicing appropriate procedures in terms of writing fieldnotes, conducting interviews, and analyzing data; collecting enough data to support significant findings" (Tracy, 2020, p. 271, dot points in original removed)—a point we will return to in the next section in this chapter.

TIP: SARAH J.
TRACY'S EIGHT "BIG TENT" CRITERIA FOR EXCELLENT QUALITATIVE RESEARCH

Tracy (2010, 2020) identifies what she terms eight "big tent" criteria or end goals for excellent qualitative research. She reveals that she developed this model by synthesizing "a number of practices across theoretical traditions and paradigms into an expansive 'big tent' framework for high quality qualitative research" (2020, p. 268). In this framework, she identifies eight criteria for the end goal of achieving quality. The eight big tent criteria for quality are worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethical, and meaningful coherence. For each of these criteria Tracy identifies various means, practices, and methods through which to achieve those end goals. For example, means, practices, and methods through which to achieve the end goal of a worthy topic include that the topic of the research is relevant, timely, significant, and interesting (Tracy, 2020).
Although there are no hard and fast rules for what a comprehensive or "good" interpretation is, Yin (2016) suggests that one way to achieve a credible interpretation is to consider striving for as many of the following attributes as possible:

1. Completeness (Does your interpretation have a beginning, middle, and end?)
2. Fairness (Given your interpretative stance, would others with the same stance arrive at the same interpretation?)
3. Empirical accuracy (Does your interpretation fairly represent your data?)
4. Value-added (Is the interpretation new, or is it mainly a repetition of your topic's literature?)
5. Credibility (Independent of its creativity, how would the most esteemed peers in your field critique or accept your interpretation?) (p. 221)

TIP: TRIANGULATION

An approach that is used in some qualitative research designs to establish credibility and therefore trustworthiness of any interpretations made is the idea of triangulation. "[T]riangulation means that you take different perspectives on an issue in your study or in answering your research questions" (Flick, 2020, p. 187). The idea is that by incorporating more than one data source or theory or method or researcher in a study, fuller, or richer, or more nuanced descriptions, analyses, and therefore interpretations, of an issue are possible. "The goal of multiple triangulation is a fully grounded interpretive research approach. . . . In-depth understanding, not validity, is sought in any interpretive study" (Denzin, 1989b, p. 246). Note that in the quote above, Denzin makes it clear that the purpose of using triangulation is to provide in-depth understanding, not validity. It is the enhanced in-depth understanding provided by triangulation that can assist in establishing the credibility of a qualitative study. This is an important point.
If you do claim to be using different data or investigators or theories or methodologies (the four forms of triangulation identified by Denzin, 1989b), you will need to clearly demonstrate and justify how employing different perspectives (e.g., methods, or theories) can assist in developing in-depth understandings of what you are interested in knowing more about (e.g., a particular issue), and how these in-depth understandings enhance the rigor, credibility, and hence the trustworthiness of the interpretations you make, and on which you base your conclusions.

COLLECTING AND ANALYZING DATA—WHEN DO YOU KNOW THAT YOU ARE "DONE"?

There are no hard-and-fast rules for when to end the iterative cycles that comprise the process of qualitative data collection and analysis. However, in many (but not all) discussions of qualitative analysis, you will see the idea of saturation used to signal the point at which data collection, and analysis of new data, ceases. An example of a typical definition for
what is meant by saturation is "when, in qualitative data collection, the researcher stops collecting data because fresh data no longer sparks new insights or reveals new properties" (Creswell, 2014, p. 248). You will often see this understanding of the idea of saturation referred to as theoretical saturation. The idea that underpins theoretical saturation is that when you conduct and analyze more interviews, observations, or texts, no new "properties, dimensions, conditions, actions/interactions, or consequences" (Strauss & Corbin, 1998, p. 136) related to your research problem or questions emerge. This does not mean that all participants in your research are saying or doing the same thing. Rather, it means that what they are saying or doing are more examples of the same properties, dimensions, conditions, actions or interactions, or consequences that you have identified from the iterative analysis of the previous data you have collected. Any conclusions about saturation can only be reached after a period of immersion in, and careful thinking about, the qualitative data that you have collected. This includes making sure that you have actively looked for data that does not support the properties, dimensions, conditions, actions or interactions, or consequences that your analysis and interpretation of that data has developed. In this way, you attempt "to understand what . . . [you] have obtained" (Liamputtong, 2009, p. 277) by getting a "sense of the whole" (Patton, 2002, p. 440, bold in original changed to plain text).

Connecting Analytical Considerations to Decisions About Sample Size

Keeping this discussion about saturation in mind gives us a clue as to how we might answer a question that inevitably arises when we are designing qualitative research. This is the question of how many people we will interview, or how many focus groups we will need to conduct before reaching theoretical saturation.
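The bookkeeping behind this idea can be caricatured in code: after each interview, note which codes are new; when successive interviews stop yielding new codes, you have one (mechanical, and deliberately oversimplified) signal of possible saturation. The interviews and codes below are hypothetical, and no such count replaces the immersion, careful thinking, and deviant-case checking described above.

```python
def new_codes_per_interview(coded_interviews):
    """For each interview's code list, count codes not seen before."""
    seen, new_counts = set(), []
    for codes in coded_interviews:
        fresh = set(codes) - seen
        new_counts.append(len(fresh))
        seen |= set(codes)
    return new_counts

# Hypothetical coded interviews: each list holds the codes applied to
# one interview. Later interviews repeat earlier codes, so the count
# of fresh codes falls toward zero.
coded_interviews = [
    ["isolation", "autonomy"],
    ["autonomy", "workload"],
    ["isolation", "workload"],
    ["workload", "autonomy"],
]
print(new_codes_per_interview(coded_interviews))  # -> [2, 1, 0, 0]
```

A run of zeros is at best a prompt to ask whether saturation has been reached, not evidence that it has: participants may still be revealing new dimensions of existing codes that a tally like this cannot see.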
In other words, how many interviews are sufficient to base our analysis on in order for the interpretations arising from that analysis to be credible? Or putting it very simply, how do we know that we have collected and analyzed "enough" data? How do we know when we are "done"? The same sort of question applies to how many observations we will make, or how many texts we will analyze. These types of questions are questions about what the size of our sample will be.

At this point, you may be hoping that we will now provide you with the number of interviews, observations, and texts that are sufficient to yield "enough" rich data to ensure the trustworthiness of a qualitative study. Or that we can provide you with some sort of mathematical calculation that you can do to tell you this. However, we are not able to give you a number or a calculation that you can do to obtain that number. This is because the inductive and iterative nature of the thinking that underpins a qualitative research design allows for aspects of that design to emerge as the research progresses; for example, to follow up inconsistencies in the data collected that require further elucidation, or to add questions to an interview guide as a result of (possibly unanticipated) responses that you have received and now need to know more about. Such an emergent design militates against the idea of determining or calculating precisely how many interviews are "enough" to include in your research design at the outset of the research. Considerations of how much and what type of data to collect do not stand alone from the iterative analytical process of qualitative data. It is this analytic process that guides how much data you will need or whether you will still need to collect more in order for your interpretations of that data to be credible, and your analytic strategy to be
trustworthy. This is why while "[an] initial approximation of sample size is necessary for planning, . . . the adequacy of the final sample size must be evaluated continuously during the research process" (Malterud et al., 2016, p. 1759).

Activity: Navigating the Requirement to Give a Number When It Is Not Possible to Do So

Research funding bodies, dissertation committees, and ethics committees often require a number, or at least an indicative number, to be given in advance for the number of interviews to be conducted, or observations to be made, when they are evaluating the credibility and feasibility of your research design (see Cheek, 2000a, 2018b). Researchers who are new to qualitative inquiry often have problems with this question. They worry that if they give a number, will they have "enough" data or a "big enough" sample? What happens if they want to change the number? Given all this uncertainty, how can this requirement from these types of committees be navigated?

Baker and Edwards developed a National Centre for Research Methods (NCRM) Review paper (2012) to address these types of questions. They collected and reviewed responses from renowned social scientists and early career researchers whom they asked how many participants is enough in a qualitative study. The answer they received from most of them was "it depends" (Baker & Edwards, 2012, p. 2). Moreover,

    In considering what "it depends upon" however, the responses offer guidance on the epistemological, methodological and practical issues to take into account when conducting research projects. This includes advice about assessing research aims and objectives, validity within epistemic communities and available time and resources. (p. 2)

Baker and Edwards go on to provide an excellent overview and summary of the 19 responses in their introduction to the Review paper.
We recommend that you take a close look at both this introduction and the actual responses themselves. Think carefully about them, and then take up a position that you are able to justify about the way you are going to answer the question of how many [insert the method(s) you are using, e.g., interviews] are enough when developing your research design.

Principles to Guide Sample Size Considerations in Your Qualitative Research Design

There are some principles that can guide us when thinking about sample size when designing qualitative research. A key one is that “[t]he validity, meaningfulness, and insights generated from qualitative inquiry have more to do with the information richness of the cases selected and the observational/analytical capabilities of the researcher than with sample size” (Patton, 2002, p. 245). This means that while a sample may be numerically smaller in qualitative inquiry than in quantitative inquiry, it may not be smaller in terms of the amount of information that is obtained from it.14
Just including more participants, or making more observations, in your qualitative study does not necessarily make that study more credible or more trustworthy. Indeed, the opposite can be true. If a sample is too large, then it will not permit “the deep, case-oriented analysis that is the raison d’être of qualitative inquiry” (Boddy, 2016, p. 429, drawing on Sandelowski, 1995). A numerically smaller sample in a qualitative study can provide much more in-depth and rich data about each information-rich person or site than a numerically larger sample can, where the description, analysis, or interpretation of each person or site is likely to be thin and shallow.

Trying to determine an exact number at the outset of the research, or insisting that one is given, or working explicitly or implicitly on the assumption that having more participants or more sites in your study necessarily means “better” or “more in-depth” data, is to apply positivist assumptions to qualitative research. This violates the methodological understandings underpinning the vast majority of approaches in qualitative research, which are non-positivist in orientation.

Therefore, it is not the number or length of interviews or observations that matters when making decisions about when you are “done” with collecting your data. Rather, it is how information rich the participants and sites in your sample are, and how much of that information you were able to glean from those participants and those sites. The more information rich each person or site in your sample is, and the more information rich each interview, observation, or text that you base your analysis on is, the fewer sites, participants, interviews, observations, or texts are likely to be needed to reach, for example, some form of saturation.
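One rough way to make the idea of saturation concrete is to track how many new codes each successive interview contributes, and to notice when that number dwindles toward zero. The sketch below is our own illustration with invented interview data and codes; it is not a procedure from this chapter, and it is no substitute for the reflexive, iterative judgment described above.

```python
# Hypothetical illustration: counting how many previously unseen codes
# each successive interview contributes. The interviews and codes below
# are invented for the sketch.
interviews = [
    {"cost", "access"},
    {"cost", "trust"},
    {"access", "trust", "stigma"},
    {"cost", "stigma"},
    {"trust"},
]

seen = set()
new_codes_per_interview = []
for codes in interviews:
    fresh = codes - seen          # codes not met in any earlier interview
    new_codes_per_interview.append(len(fresh))
    seen |= codes                 # fold this interview's codes into the set

print(new_codes_per_interview)    # [2, 1, 1, 0, 0]
```

The dwindling counts at the end are a prompt for reflexive judgment, not a stopping rule: as the chapter stresses, only sustained engagement with the data can tell you whether “enough is enough.”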
Therefore, questions about when and how you know if you have collected and analyzed “enough” qualitative data can be answered by asking yourself other types of questions, ones that take the emphasis off the quantity of data or amount of analysis that you have done and put it on what you are able to use that data or analysis for. For example, do you have enough rich and in-depth information to drive your analytical strategy? Is the “information power” (Malterud et al., 2016, p. 1753) of your purposeful sample and the data collected from that sample sufficient to enable you to make and justify credible interpretations of the data you have collected and analyzed?

You can only know the answer to these questions when, after a long period of reflexively working with the analysis of the data you have collected, you decide that you are able to produce credible and trustworthy interpretations of what it is that your research is focused on and that can address your research problem or questions. This is part of ensuring the credibility of your research design and the conclusions you reach from the qualitative research that you conduct using that design. Remember, “[a] systematic and careful description of the whole process of interaction with the reality under study is what indicates good quality when using a qualitative method” (Stenbacka, 2001, p. 555). It also assists you in knowing when enough is enough.

TIP
AN EXCEPTION

We have argued above that, in qualitative research, it is the amount and richness of relevant information gained from each person or site in your sample that is key, not the sample size per se. One exception to this is when qualitative data is collected within a post-positivist inquiry paradigm. For example, if the aim of collecting the qualitative
data is to inform the development of items for a questionnaire for a quantitative survey study, then the sample of people or sites from which that qualitative data is obtained will need to reflect that aim. Therefore, it would be appropriate to obtain qualitative data from a representative sample of the intended population from which the survey participants will be drawn in order to be able to prioritize what items or questions will be included in the questionnaire. In this type of study design, the researcher applies post-positivist derived thinking to “a qualitative element of research to set the parameters for a further positivist quantification. . . . [U]nder this approach, a criticism of sample size because of smallness may well be justified” (Boddy, 2016, p. 429). This is because the principle of representativeness underpins the generalization of the findings from analyzing sample data to the larger population in question (see Chapter 8 of this book). Therefore,

when qualitative research is being undertaken under a positivist approach . . . for example with a view to developing a quantitative measurement instrument such as a questionnaire . . . [it] would necessitate sampling a greater number of respondents, and, in general, at least one representative of each segment of the population under consideration in the wider research should be sampled in the qualitative research. (Boddy, 2016, p. 430)

CONCLUSIONS

Designing qualitative research involves thinking about, and making choices related to, how you will obtain the type of qualitative data you need to be able to address your research problem or questions. It also involves thinking about what you will do with that data when you have collected it. How will you analyze that data, and why? To do so will require you to reflexively think about, and work through, questions such as these:

• How will you organize the qualitative data you collect?
• What analytic strategy will you choose for working with, analyzing, and interpreting that qualitative data?
• What form will aspects of that analytic strategy take?
• How will you convince readers of your research of the credibility of your analysis and interpretation of that data, and therefore of the conclusions you reach about the questions your research is being designed to address?

These are crucial questions to think through, and make decisions about, when designing qualitative research. This is because “[t]he strengths of qualitative data rest centrally on the competence with which their analysis is carried out” (Miles et al., 2014, p. 12). To each of the above questions you will need to add another: Why did you answer it in the way that you did? For example, why will you organize your data in that way? Why will your analytic strategy take that form and not another?

Justifying the way that you analyze the qualitative data you collect is an important part of establishing the trustworthiness and credibility of your qualitative research design, as well as the trustworthiness and credibility of the conclusions you reach as a result of putting that design into practice. This includes demonstrating that the way you choose to organize the
qualitative data you collect, and the form that your analytic strategy takes, are congruent with the theoretical and methodological assumptions underpinning your research. For example, when designing your research, you will need to think about, and make choices about, whether coding will play a part in the way you will analyze and interpret your qualitative data. Why or why not? If you decide that coding will be a part of your overall analytical strategy, then you will need to decide, and justify, how you will put your coding strategy into practice. What type of coding will you choose to use, and why?

A set of transcribed interviews, a description of them, a set of codes and categories or themes derived from a coding strategy employed on those interviews, or a description of each of those categories or themes: none of these, in themselves, are the findings of your research. This is because while such description is part of the analysis and interpretation of your data, on its own it is not enough to make sense of the data. Even if participants in your research have provided you with in-depth and rich information, unless you think through, analyze, and interpret that information carefully, its richness and depth will be lost. The result will be thin and shallow analysis, predominantly made up of lengthy verbatim quotes of what was said, or descriptions of what was done, with little, if any, analytical or interpretive depth. The findings of your research are, rather, the intellectual work, or art of interpretation, related to those codes, categories, or themes.

Therefore, throughout the discussion in this chapter, we have emphasized the interconnections between data collection, analysis, and interpretation. This interconnectedness is crucial because “inquiry conclusions, interpretations, or warranted assertions arise from the mind of the inquirer, not directly from the output of a statistical or thematic analysis” (Greene, 2007, p. 142).
Regardless of what form your analytic strategy takes, it is important to keep in mind a point that many scholars of qualitative research have highlighted when discussing the analysis of qualitative data. This is that the only way to really understand how analysis works is by doing it (Freeman, 2017). A textbook can provide you with guiding (not prescriptive) principles for putting a qualitative analytical strategy into practice. However, at some point you are going to have to take the plunge and actually put the analytic strategy that you have developed into practice. “[R]eading theory—just like reading cookbooks or studying sheet music—will only get you so far. . . . [A]t some point you need to stop reading and start practicing” (Tracy, 2020, pp. 220–221).

The discussion in this chapter has highlighted what you will need to think about when taking that plunge. In the next chapter, we continue our discussion of how to put the methodological thinking associated with, and underpinning, methods into practice when designing research. Our focus shifts to the series of decisions you will need to make related to analyzing data when designing research that employs quantitative research approaches.

SUMMARY OF KEY POINTS

• Analyzing qualitative data is an iterative and reflexive strategy.
• Design considerations related to analyzing and interpreting qualitative data cannot be reduced to, or captured by, a fixed linear series of prescribed steps or phases.
• Therefore, ways of analyzing and interpreting qualitative data are strategies, not procedures.
• When using qualitative research approaches, data collection and data analysis happen simultaneously.
• Therefore, analysis “begins” as you are in the process of collecting your first qualitative data.
• There is variation in the overall analytic strategies employed in qualitative inquiry, both within specific qualitative approaches and between different qualitative approaches.
• All aspects of the analytical strategy you choose in your research design must be congruent with the methodological and theoretical traditions that the design is premised on.
• For example, the decision of whether to include coding in your analytical strategy, and if so, how you put the idea of coding into practice, must be congruent with the purpose of your research and the methodological and theoretical traditions of the qualitative approach you are using.
• Analysis can only end after a period of immersion in, and careful thinking about and interpretation of, the qualitative data that you have collected. This includes making sure that you have actively looked for data that does not support the properties, dimensions, conditions, actions or interactions, or consequences that your analysis and interpretation of that data has developed.
• Developing a comprehensive interpretation requires thinking about how the ongoing analysis and interpretations of the data that you have collected come together into a higher level of abstraction.
• This reflexive work comprises the art of interpretation.
• It is not the number of interviews/observations or the amount of data that matters when determining sample size and deciding when you are “done” with collecting your data.
What does matter is how information rich the participants and/or sites in your study are, and how rich the information is that you obtained from them.
• You are done collecting your data when, after a long period of working with your data, you decide that you are able to produce credible and trustworthy interpretations that will enable you to address your research questions.
• Becoming a good analyst of qualitative data takes practice and a lot of hard work.

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

analysis of qualitative data
audit trail
category
codebook
coding
computer-assisted qualitative data analysis software (CAQDAS)
data condensation
grounded theory
memos
rigor in qualitative research
sample size
transcription
triangulation
trustworthiness

SUPPLEMENTAL ACTIVITIES

1. Many people make assumptions about how research should be done, including how data collected in that research should be analyzed. Often this is based on the different methodological and disciplinary traditions that they are part of. To illustrate this point, try the following activity:
a. Find a fellow student or colleague who is studying or has studied qualitative research. Explain to them the type of analytical strategy you are intending to use in your study and why. Take note of the questions that they ask you and reflect on why they may have asked them and what implications this might have for the way you design your research.
b. Find a student or colleague who is studying or has studied quantitative research. Explain to them the type of analytical strategy you are intending to use in your study and why. Take note of the questions that they ask you and reflect on why they may have asked them and what implications this might have for the way you design your research.
c. Compare the types of questions asked in (a) and (b) above. If the questions differed, work out why this might be so.
2. Read the article by Bailey et al. (Bailey, K. A., Dagenais, M., & Gammage, K. L. (2021). Is a picture worth a thousand words? Using photo-elicitation to study body image in middle-to-older age women with and without multiple sclerosis. Qualitative Health Research, 31(8), 1542–1554. https://doi.org/10.1177/10497323211014830). It provides an excellent example of putting an iterative qualitative analytical strategy into practice.
a. While reading it, write memos about what might convince you of the credibility and trustworthiness of the research being reported.
b. Share your memos with another person who has done the same exercise. When doing so, discuss similarities in the memos you wrote as well as differences.
c. Think about how you might be able to use these memos and this discussion when designing the analytical strategies in your own research design in order to convince readers of the trustworthiness of what you are reporting.

FURTHER READINGS

Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). SAGE.
Tracy, S. J. (2020). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact (2nd ed., Chaps. 8–11). John Wiley.
Yin, R. K. (2016). Qualitative research from start to finish (2nd ed.). Guilford Press.
NOTES

1. We are using the term generally. There are more specific uses of the term, such as constant comparison in grounded theory. If you are designing a grounded theory study, you will need to draw on the specific understanding of constant comparison associated with a grounded theory research design.
2. See Chapter 6 for a discussion of the use of probes in interviews.
3. See Chapter 2, section Putting Confidentiality and Anonymity Into Practice.
4. Quoted text is in bold in Patton (2002).
5. See Chapter 6 for an extended discussion of the idea of lines of inquiry.
6. For accessible discussions of the uses of computer software in qualitative studies, see Miles et al. (2014, pp. 46–50). Yin (2016, pp. 184–213) also has a good discussion of what you will need to think about when using computer software as part of your analytic strategy in qualitative research, as do Hesse-Biber (2017, pp. 329–333) and Merriam and Tisdell (2016, pp. 221–226).
7. See Yin (2016), pages 199–201, where he discusses “[d]isassembling data without coding them” (p. 199).
8. You will also see this initial stage of coding referred to as open coding or Level 1 and 2 coding (see Yin, 2016).
9. The full reference for the study is Tracy, S. J., & Rivera, K. D. (2010). Endorsing equity and applauding stay-at-home moms: How male voices on work-life reveal aversive sexism and flickers of transformation. Management Communication Quarterly, 24(1), 3–43. https://doi.org/10.1177/0893318909352248
10. See Saldaña (2021, p. 72) for a discussion of these types of coding, including where you might go to read further about these types of coding and the memos arising from them.
11. One of the original tenets of grounded theory (see Glaser & Strauss, 1967) was that researchers should delay reviewing the literature until after they had collected and analyzed their data.
Glaser later maintained this view:

Grounded theory’s very strong dicta are a) do not do a literature review in the substantive area and related areas where the research is to be done, and b) when the grounded theory is nearly completed during sorting and writing up, then the literature search in the substantive area can be accomplished and woven into the theory as more data for constant comparison. (Glaser, 1998, p. 67)

This was highly controversial, and not all grounded theorists agreed with it. For example, Strauss and Corbin (1998) argue that the literature can be used to help formulate questions, acting “as a stepping off point” (p. 51) during interviews, and can help understand “[w]hat is going on” (p. 51) during data analysis. For a thorough discussion of the literature review debate among grounded theorists, see Giles et al. (2013).
12. You will recall from Chapter 4 that in a constructivist inquiry paradigm, positivist derived criteria of validity “are replaced by such terms as trustworthiness and authenticity” (Denzin & Lincoln, 2018c, p. 98). The trustworthiness and authenticity of the reconstructed accounts of aspects of the social world that form the findings are what make the research, and the conclusions reached from research drawing on the set of basic beliefs underpinning a constructivist inquiry paradigm, credible.
13. See Chapter 5 of this book.
14. Even in quantitative approaches such as quantitative surveys (see Chapters 8 and 9 of this book), which are driven by probability-based considerations, the sampling strategy and the overall conduct of the survey, assessed in light of the goal of the study, are just as important as sample size. Increasing the sample size will in most cases strengthen the validity of the findings of a quantitative study, but a large sample size is in itself not sufficient to ensure that the findings are considered valid.
8
FOUNDATIONAL DESIGN ISSUES WHEN USING QUANTITATIVE METHODS

PURPOSES AND GOALS OF THE CHAPTER

This chapter is the first of two chapters that focus on the series of research design decisions you will need to make when putting quantitative research approaches into practice. In this chapter, our focus is the decisions you will need to make before you will be able to begin collecting numerical data when using some form of quantitative approach as part of a research design.

Answering your research questions will involve reaching some sort of credible conclusions about those questions based on the analysis of the numerical data you have collected. In quantitative research, statistical procedures are used to undertake that analysis and to ground judgments about the credibility of those conclusions. However, statistical procedures differ from one another in terms of what sort of conclusions they enable us to make. Some statistical analyses enable us to describe the frequency of something we want to know more about in a group of people that is of interest, for example, how many people in that group eat fast food more than twice a week. Other types of statistical analyses enable us to draw conclusions about the relationship between one or more of the things we are interested in about the group of people, for example, is there a link between the age of people in that group and how frequently they consume fast food? Therefore, when designing your research, you will need to incorporate into that design a type of statistical procedure(s) that will enable you to draw the type of conclusions you will need to make to answer your research questions.
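The difference between these two kinds of conclusions can be sketched in a few lines of Python. The fast-food example is the chapter's; the numbers below are invented for illustration, and the hand-computed Pearson coefficient is only one of many relational statistics a design might call for.

```python
# Hypothetical data: age and weekly fast-food meals for a small group.
ages = [19, 23, 31, 38, 44, 52, 60, 67]
fast_food_per_week = [4, 3, 3, 2, 2, 1, 1, 0]

# Descriptive question: how many people eat fast food more than twice a week?
more_than_twice = sum(1 for meals in fast_food_per_week if meals > 2)

# Relational question: is there a link between age and consumption?
# A Pearson correlation coefficient, computed by hand from the definitions.
n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(fast_food_per_week) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, fast_food_per_week))
var_x = sum((x - mean_x) ** 2 for x in ages)
var_y = sum((y - mean_y) ** 2 for y in fast_food_per_week)
r = cov / (var_x ** 0.5 * var_y ** 0.5)

print(more_than_twice)   # 3 frequent consumers
print(round(r, 2))       # -0.97: consumption falls with age in this toy data
```

The point of the sketch is the chapter's point: the two questions require different procedures, and each procedure places its own demands on the type and amount of data collected.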
Just as you need to choose a procedure for statistically analyzing the research data that will enable the sort of conclusion that can provide a credible answer to your research question, you will also need to make sure, before you collect any data, that your design includes a plan for collecting enough of the “right” type of numerical data to base that statistical analysis on. Otherwise, that procedure for statistically analyzing the research data will not enable you to arrive at credible research findings.

However, even having enough data of the right type is not necessarily enough for you to be able to arrive at credible research findings. This is because who provides the numerical data, that is, who the participants1 in your study are, will enable or impede reaching credible research findings.

Therefore, there needs to be a clear and explicit link in your quantitative research design between the numerical data you propose to collect (type and amount), the type of statistical analyses that you propose to do, and the research questions that you are asking. Making this link explicit lies at the heart of establishing a statistically reasonable quantitative research design. Consequently, that link will be part of establishing the credibility of your research findings.

In addition, there needs to be an equally clear and explicit link in your quantitative research design between the group of people that you want the answers to your research
questions to apply to, and the (often smaller) group of people from which the numerical data will be collected. How to make these clear and explicit links, and what effect that will have on the way you design your quantitative research, is what this chapter is all about. This sets the stage for the next chapter, which is about the decisions you will have to make in relation to actually collecting this numerical data from that group of people.

Note: Discussing in-depth details from the field of statistics is outside the scope of this book. Therefore, you will not find detailed discussions of any specific procedure for statistically analyzing research data in this chapter. What you will find are discussions about the thinking needed to be able to incorporate and use statistics well in your research design. You will also find tip boxes containing advice about statistical concepts you may need to learn more about, as well as tips about when it may be wise to seek help from a statistician when developing your design.

The goals of the chapter are to

• Establish that although statistics are part of any research using a quantitative approach, statistics are not what quantitative approaches are. Quantitative approaches are about much more than the statistics involved in them.
• Highlight that using statistics well in a quantitatively driven research design requires statistical procedures and probability-based principles to be incorporated into that design, rather than seen as stand-alone parts of it.
• Provide you with enough knowledge related to data analysis, and to selecting participants, to be able to use statistics and probability-based principles well in your quantitatively driven research design.
• Demonstrate that to be able to incorporate statistical procedures and probability-based principles into a research design, you need adequate, but not necessarily exhaustive, knowledge about the logics and requirements of such procedures and principles.
• Highlight that there needs to be a clear and explicit link in your quantitative research design between the group of people to whom you want the answers to your research questions to apply, and the (often smaller) group of people from which the numerical data will be collected.
• Emphasize that there also needs to be a clear and explicit link in your quantitative research design between the numerical data you propose to collect (type and amount), the type of statistical analyses that you propose to perform on those data, and the research questions that you are asking.
• Provide direction and advice about what statistical concepts you will need to know more about when designing quantitative research.

WHAT YOU NEED TO THINK ABOUT IN ORDER TO DESIGN CREDIBLE QUANTITATIVE RESEARCH

The departure point for the discussion in this chapter is that you already have a clearly defined research area, problem, or question and you have thought about what type of information you will need to address the problem that your research is being designed to
address. Your thinking has led you to a point where you have decided that your research question or problem is about identifying, and quantitatively describing, tendencies in, and characteristics of, a given (large) group of people in relation to their behaviors, preferences, and experiences. This has led you to conclude that the type of information you need can be provided by a quantitatively driven research approach.2

The purpose of all research is to arrive at credible answers to research questions by applying an accepted method for collecting and analyzing research data. In quantitative research, such accepted methods include the use of probability-based procedures to decide who will provide the data for the study, as well as procedures for statistically analyzing that data. Moreover, these accepted methods include using some sort of device or tool to enable the collection of relevant and credible data for the study, as well as accepted ways of putting that device or tool into practice to actually collect that data.

In other words, arriving at credible research findings using some form of quantitative approach requires using statistical and probability-based procedures in such a way that they enable answers to the research questions that are statistically reasonable. By statistically reasonable, we mean that the statistical procedures incorporated into the design are used in a way that is in keeping with the laws of statistics. When the research design is statistically reasonable, it can be justified and defended (within the rules of statistics). A statistically justifiable and defensible research design is necessary for the research to be credible.
However, arriving at credible research findings using some form of quantitative approach also requires designing a way of obtaining (numerical) data that are relevant to the issue in question, as well as a way of collecting those relevant data such that the data obtained represent credible pieces of relevant information about the issue in question. By credible, we mean that the data represent pieces of information that you have reason to believe correspond well to the real-life behaviors, preferences, and experiences of the people from whom those data have been collected.

TIP
WHY STATISTICS ARE A MEANS, NOT AN END IN THEMSELVES WHEN DESIGNING YOUR RESEARCH

The thinking you need to do in order to design quantitative research that is statistically reasonable includes thinking about statistical procedures at two levels:

1. the role of procedures derived from the laws of statistics and probability when aiming to learn something about the group of people that is the subject of the research, and
2. how different types of research questions require different types of statistical procedures and data to answer them.

The way we use the term statistically reasonable in this chapter is close to what textbooks about statistics and quantitative research methods would call statistical validity. Conclusions drawn from a procedure for statistically analyzing the research data are statistically valid if the analysis procedure is appropriate for drawing that specific type of conclusion, and if the analysis is performed on a set of data that meets the requirements of that analysis procedure. The idea of statistical reasonableness highlights that when designing quantitative research, a researcher is not primarily interested in statistics per se. Rather, they are interested in what kind of answers to their research question(s) these statistical procedures may enable. Consequently, in a quantitative research design, the statistics are a means, not an end in themselves.
Key Questions to Ask Yourself When Designing Quantitative Research

In the discussion so far in this chapter, we have highlighted what you need to think through, and then find out more about, before you can develop and put any quantitative approach into action. This includes asking yourself questions such as

• What exactly do I want to be able to say something about, deduce, confirm, or demonstrate in relation to my research questions?
• To whom do I want the answers to my research questions to apply?
• With this in mind, what type of statistical procedures will I need to include in my research design to enable me to say or demonstrate this (i.e., answer my research questions) on the basis of the analysis of my numerical data?
• What type of numerical data will I need to enable, and ensure, the credibility of those specific statistical procedures?
• Once I know this, how will I actually go about collecting that type of numerical data? For example, from whom will I collect the numerical data? How will I decide that?
• What type of measurement instrument will I use to do so?

Your research design arises from the decisions you systematically make about each of the questions above.

Why You Need to Ask Yourself All These Key Questions Simultaneously

Note that none of the above key questions can be thought about in isolation. If your research questions are not well enough developed, then you will not be able to make decisions about any of the other questions. If you don’t know what you need the analyses of your data to be able to show you in order to answer your research questions, then it will be impossible for you to work out what procedures for statistically analyzing the research data are reasonable to use in your study design.
If you don't know what you want your numerical data for (i.e., to enable the procedures for statistically analyzing the research data that will underpin the answers to your research questions), then it will be impossible to work out what form the numerical data you collect will need to take. If you don't know to whom you want the answers to your research questions to apply, it will be impossible for you to work out who to collect the data from. However, even if you do work out what statistical procedures are appropriate to incorporate into your research design, that in itself is not enough to ensure the credibility of your research. This is because no statistical analytical procedure, no matter how well it is done, can make up for shortcomings in the credibility of the data on which such statistical analyses are performed and conclusions are based. If the numerical data are not credible, then neither is any analysis based on those data. Therefore, the credibility of your quantitative research design and findings lies as much in the way that the numerical data have been collected and the form those data take as it does in the statistical procedures used to analyze those data and in the
Chapter 8 • Foundational Design Issues When Using Quantitative Methods   185

probability-based principles used to decide who will provide those data. This is why you need to ask yourself all these key questions simultaneously. The discussion in this chapter and the following one will unpack the implications that asking yourself all these key questions simultaneously has when you are designing your research. In the next section we focus on how procedures derived from laws of statistics and probability can help us learn something we want to know about the group of people that is the subject of the research.

TIP: HOW TO NAVIGATE OUR DISCUSSION ABOUT DESIGNING CREDIBLE QUANTITATIVE RESEARCH IN CHAPTERS 8 AND 9

In this chapter (Chapter 8), the spotlight is on what you need to think, and make decisions, about in order to develop a quantitative research design that is statistically reasonable. We will begin by looking at what you will need to think about when making decisions about selecting your study population, or a subsample of it. We demonstrate that your decisions about who makes up that population, or sample, must be statistically reasonable for your research design to enable credible answers to your research questions. We then move on to look at differences in the types of statistical procedures you might choose to use to analyze the data collected from that population or sample. Will the procedures you choose to incorporate into your design enable answers to your research questions that are statistically reasonable? Finally, we look at different types of numerical data and what type of data you will need to collect in order to be able to perform the statistical procedures required for answering your research questions. We point out that simply having numerical data (even lots of it) is not enough to ensure that your research is statistically reasonable.
The discussion in this chapter about what makes a research design statistically reasonable sets the stage for the next chapter (Chapter 9). In that chapter, we shift the focus of the discussion to a more operational level: we take a closer look at the decisions you will need to make in relation to designing a way to go about collecting relevant and credible numerical data.

WHERE TO BEGIN? DECIDING WHO YOU WILL COLLECT NUMERICAL DATA FROM AND WHY

Somewhere in the process of designing quantitative research to enable you to answer your research question(s), you must make it very clear who or what comprises the group that the answers to those research questions will apply to, and why. This group is called the study population. In other words, the study population for your research design is the group that you want to be able to say something about at the end of your research. Therefore, they are the group of people or objects you will need to obtain data from or about. In the social sciences, this study population is generally a (large) group of people, where every member of that group meets a set of inclusion criteria relevant to the research question(s), and where no one meeting those inclusion criteria is excluded from that group. For example, a researcher in education may want to address the question, Does class group size impact the learning outcomes in mathematics among first-year university students in California? In this case, the study population would comprise everyone meeting three inclusion criteria: (1) first-year university students in California (2) who are enrolled in a study program (3) that includes classes in mathematics.
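The logic of inclusion criteria can be sketched as a simple filter. The following Python snippet is our own hypothetical illustration (the records and field names are invented, not taken from an actual study): the study population is exactly the set of candidates for whom every inclusion criterion holds.

```python
# Hypothetical sketch: representing inclusion criteria as a filter.
# The candidate records and field names below are invented for illustration.

def in_study_population(person):
    """Return True when a candidate meets all three inclusion criteria."""
    return (
        person["first_year"]                    # (1) first-year university student
        and person["state"] == "California"     # ... studying in California
        and person["enrolled"]                  # (2) enrolled in a study program
        and "mathematics" in person["classes"]  # (3) program includes mathematics
    )

candidates = [
    {"first_year": True, "state": "California", "enrolled": True,
     "classes": ["mathematics", "physics"]},
    {"first_year": True, "state": "Oregon", "enrolled": True,
     "classes": ["mathematics"]},
    {"first_year": False, "state": "California", "enrolled": True,
     "classes": ["mathematics"]},
    {"first_year": True, "state": "California", "enrolled": True,
     "classes": ["history"]},
]

study_population = [p for p in candidates if in_study_population(p)]
print(len(study_population))  # only the first candidate meets every criterion
```

Note that no one meeting all criteria is filtered out, and no one failing any criterion is let in; that is precisely what defines the study population.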
TIP: OUR USE OF "WHO" AND "PEOPLE" WHEN WRITING ABOUT STUDY POPULATIONS

When writing about study populations generally, we will refer to who comprises that population, or the people in that population. However, study populations do not always include people. For example, you might want to know the density of algae (measured by the number of cells of algae per cubic meter of water) in Lake Michigan, or you might want to know about objects such as surveillance cameras—how many there are and where they are placed in banks in the United States. In these cases, your study population is made up of algae in Lake Michigan and surveillance cameras in banks in the United States. We talk about "who" to avoid having to write who/what, and also people/objects/things and so on, each time we mention study populations.

If you choose to collect data from every member of the study population, the answers to your research questions arising from analyzing those data will of course apply to the study population as a whole. However, it is not always possible to include every member of the study population in your research study, for example, if you are a single researcher with limited resources (time and money) or the study population is very large. In this case, your research design will include a strategy for selecting a subset, or sample, drawn from the study population. The study sample comprises the members of the study population who actually provide the information that comprises the data of the study. You can then use the information gained about a characteristic of interest from that sample to make an estimate for the entire study population about that characteristic. In the box below we provide an example of how this works.

PUTTING IT INTO PRACTICE: HOW ESTIMATION PROVIDES THE LINK BETWEEN SAMPLE DATA AND POPULATION CHARACTERISTICS

Suppose you want to know how many people per square kilometer in Canada have red hair.
You have two options for finding this out:

1. Count the number of people in Canada who do have red hair (we call that number x), and then divide by the total number of square kilometers making up Canada (we call that number y). The accurate number of red-haired people per square kilometer in Canada is then z = x / y.
2. Count the number of people in selected square kilometers of Canada with red hair (we call that number x̂), and calculate the total number of square kilometers in that sample (we call that number ŷ). Then ẑ = x̂ / ŷ is your estimate—or best guess—for the unknown number z.

In other words, if you are unable to find the accurate numerical value of a population characteristic (how many people per square kilometer in Canada have red hair), you can compute the numerical value of the corresponding characteristic of a sample drawn from that population (number of people in selected square kilometers of Canada with red hair). That computed numerical value of the sample characteristic is then the estimate for the numerical value of the corresponding population characteristic.
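The two options can be mimicked in a short simulation. This is a hypothetical sketch: the 1,000 "square-kilometer cells" and their red-hair counts are randomly generated toy numbers, not Canadian data. Option 1 computes the exact population value z from a full count; option 2 computes the estimate ẑ from a random sample of cells.

```python
import random

random.seed(7)  # reproducible illustration

# Invented toy "population": 1,000 square-kilometre cells, each with a known
# count of red-haired residents (in reality this full census is unavailable).
cells = [random.randint(0, 20) for _ in range(1000)]

# Option 1: the exact population value z = x / y, possible only with a full count.
z = sum(cells) / len(cells)

# Option 2: the sample estimate z_hat = x_hat / y_hat from 50 randomly chosen cells.
sample = random.sample(cells, 50)
z_hat = sum(sample) / len(sample)

print(round(z, 2), round(z_hat, 2))  # the estimate should land close to z
```

The estimate ẑ will usually be close to z, but not identical to it, which is exactly the probabilistic character of sample-based claims discussed later in this chapter.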
What You Will Need to Think About When Using a Sample in Your Research Design

Whenever a sample is used as part of research using some form of quantitative approach, keep in mind that the sample is not of interest in itself. Rather, the sample is "a tool to find out about the population" (Henry, 1990, p. 11). Therefore, the key issue to think about when using a sample is that the sample must enable learning something about the study population. Given that the purpose of the research is to learn something about the study population—not just about the sample—you need to make sure that the sample of the study enables you to make statistically reasonable claims about the study population. The degree to which your findings about the sample hold for other persons in your study population is called the external validity of those findings. Arriving at conclusions about a study population based on data from a sample drawn from that population is called generalizing in quantitative research. Such generalization is enabled by establishing a relationship between what is going on in a sample and what is going on in the population from which that sample is drawn. The nature and strength of that relationship (i.e., how confident you can be in the claims you make about the study population by way of generalizing what you have found to be going on in the sample) is decided by laws of probability. Therefore, probability-based principles are what inform your decisions about the size of the sample, as well as selecting which members of the study population to include in the sample. These decisions make up your sampling strategy. The key characteristic of a sample that enables statistically reasonable generalizations of the findings about the sample to the study population from which the sample is drawn is that the sample adequately mirrors, or represents, the study population.
A sample that represents the study population well (often referred to as a representative sample) replicates the population according to specified criteria, such as the distribution of age, gender, and level of education in the population, as well as other criteria relevant to the research question(s) the study is designed to address. Because of this, representative samples can enable answers to your research questions that are very close to those that would have been obtained had you not used a sample at all.

TIP: STOP AND THINK BEFORE YOU RELY ON INFORMATION SEEMINGLY BASED ON STATISTICS

The need for thinking about sampling is not limited to the situation of designing research—it is equally important when you read the work of someone else, and also in your everyday life. For example, there is no doubt you have at some point come across online polls. In an online newspaper reporting about an incident involving the local police, readers may be asked to respond to the polling item: Select the option that best represents your opinion about the statement "My local police treat members of the general public with respect."

1 = Strongly disagree
2 = Moderately disagree
3 = Mildly disagree
4 = Mildly agree
5 = Moderately agree
6 = Strongly agree
As readers tick the box that best represents their opinion about the statement, the result of the poll is constantly updated, for example as in the figure below. If the newspaper, on the basis of such a polling result, claims that "Close to 80% of readers feel their local police does not treat members of the general public with respect," you should stop and think about whether that claim is statistically reasonable, or just what is commonly called clickbait. When thinking further about it, you will realize that you do not know if the sample of readers who have chosen to participate in the poll is representative of the group of readers of the newspaper (i.e., the total possible study population) as a whole. Consequently, you have no way of assessing whether the claim applies to a wider population of readers or if it only applies to the group of readers who have participated in the poll. Remember, the key issue to think about when using a sample is that the sample must enable learning something about the study population. In this case, the only thing that we can be sure of is that we have learned something about the sample, but not necessarily about the study population. Therefore, when reading any form of claim based on statistics, remember that it is not the statistics themselves that matter, but rather whether the statistics can be used to make that claim.

How Do I Design a Sampling Strategy That Enables a Representative Sample?

A random selection process is a probability-based process in which every member of the study population has an equal probability of being selected. For example, when a sample comprising 20 people from the population of 200 female + 200 male employees of a specific company is selected by drawing the names of 20 employees out of a hat containing the names of all 400 employees, the selection process is random.
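The names-out-of-a-hat procedure corresponds directly to sampling without replacement, as in this sketch (the employee labels are invented placeholders):

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical version of the "names out of a hat" example: 200 female and
# 200 male employees, from whom 20 names are drawn at random.
employees = [f"F{i}" for i in range(200)] + [f"M{i}" for i in range(200)]

# random.sample draws without replacement, giving every employee the same
# probability of selection, which is the definition of a random selection process.
drawn = random.sample(employees, 20)
print(len(drawn), len(set(drawn)))  # 20 distinct names
```

Because the draw is without replacement, the 20 names are guaranteed to be distinct, just as a physical hat would guarantee.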
When employing probability sampling, that is, when using a random process to select who will be in the sample, you can, purely by chance, end up with a sample comprising the names of only male employees, for example, from the population of 400 employees. This would cause certain groups (males) to be overrepresented in the sample (i.e., only men are represented in the sample, yet the total group of employees comprises 50% male employees and 50% female employees). Such overrepresentation is the opposite of what you are trying to achieve, which is a representative sample.
However, by increasing the number of random selections from the study population (i.e., increasing the sample size), the chance of, purely by chance, ending up with a sample in which certain groups of the study population are overrepresented decreases. Therefore, for large enough samples, "probability sampling will ensure approximate representation of the population" (Nardi, 2018, p. 118). Thus, large probability samples enable generalizing in a way that is statistically reasonable. This makes it possible to generalize what you learn from statistically analyzing the data collected from a sample to the study population from which that sample is drawn.

TIP: WHAT MAKES A SAMPLE "LARGE ENOUGH" TO ENSURE APPROXIMATE REPRESENTATION OF THE POPULATION?

Unfortunately, there is not a magic number we can give you. Deciding on sample size for a particular research study is usually rather complicated and depends on a number of interrelated considerations. These include

• the number of variables included in your study—the more variables included, the larger the sample needed to capture the variability in all of them;
• the variability of the issue or phenomenon you are studying within the study population—a larger sample is needed to capture a large variation in a phenomenon;
• the type of statistical analysis you need to perform on your data.

In light of this, Gorard offers a piece of general advice when thinking about what makes your sample large enough: "have as large a sample as possible" (Gorard, 2003, p. 60). Deciding on what is "possible" for you includes taking into account the resources and time you have available.

Are There Other Sampling Strategies I Can Consider if Probability Sampling Is Not Feasible?

Probability-based sampling is not always possible or feasible in practice.
When this is the case, you may have to work with the sample you can get, even if that sample is not a probability sample. For example, lack of resources (both time and money) or lack of access (for many possible reasons) to every member of the study population may force you to employ some form of non-probability sampling strategy. Non-probability sampling, while having the advantage of making it easier for you to obtain your sample, has the definite drawback of not ensuring the generalizability you are after. This is because "the selected cases can be systematically different from the others in the study population, and there is no means to adjust or estimate how similar or different these cases selected through nonprobability sampling may be" (Henry, 2009, p. 79). However, there is something you can do to adjust a non-probability sampling strategy to help avoid the resulting sample being "atypical" of the study population. This is to incorporate elements of probability sampling into a non-probability sampling strategy. Vehovar et al. (2016) suggest two ways of doing this:

1. employing a combination of quota sampling and probability sampling; for example, (a) choosing a specific geographical area in which to conduct your
research (the geographical area defines the subgroup of the study population from which sampling will take place), and (b) employing probability sampling within that geographical area, or

2. employing a combination of convenience sampling and probability sampling; for example, (a) asking people on the street to take a survey, and (b) employing a strategy of randomly selecting which street you do this in and/or which time of day you are at that street.

Such incorporation of probability sampling into a non-probability sampling strategy may very well have the potential to underpin claims about that population that are statistically reasonable (Vehovar et al., 2016). What is important is how you describe the "correspondence between the sample and the population" (Henry, 1990, p. 12) to the audience of your research. In other words, what is important is that the claims you make based on incorporating probability sampling into a non-probability sampling strategy are statistically reasonable.

Rounding Off: Important Things to Keep in Mind if You Decide to Use a Sample in Your Research Design

1. Data collected from a sample can (but does not always) enable making statistically reasonable claims about the entire population from which that sample is drawn (i.e., enable the purpose of the research to be met).

2. A claim made about the study population based on sample data is never 100% certain. Using a sample to infer claims about the study population means basing those claims on probabilistic expectations of what is going on in the population rather than conclusive findings about that population. In other words, any claim you make about the study population based on what you learn about a sample (drawn from that population) is based on probability, not certainty.

3.
Consequently, if you decide to use a sample in your research design, you will not be able to say that something is undoubtedly true, or an indisputable fact, about the population from which that sample is drawn. Rather, your claims will always take the form "it is highly likely that X (X being whatever it is that you have found out from the sample) is true within the population in question." There is always a (small) chance that the claim you make about the population you set out to learn something about is faulty.

4. A claim about a study population that is not 100% certain may still be statistically reasonable. While you will never be able to find something that is undoubtedly true about your study population, this does not necessarily mean that the claims you make about the study population based on your findings from your sample are not statistically reasonable. Rather, whether or not those claims are statistically reasonable is related to who makes up that sample.
PUTTING IT INTO PRACTICE: DECISIONS ABOUT SAMPLING ARE NOT ALWAYS STRAIGHTFORWARD

Example 1: Sometimes Non-Probability Sampling May Underpin Credible Claims About a Study Population

As we have discussed in this section, large probability samples enable generalizing, in a statistically reasonable way, what you learn from statistically analyzing the data collected from a sample to the study population from which that sample is drawn. Therefore, it may be tempting to default to random sampling in a research design without giving it much thought. However, you will need to think carefully about who comprises your study sample to make sure that probability sampling is feasible for you. For example, if you are in the process of designing research to learn something about the inhabitants of a specific geographical area and you plan to use a sample, can you be sure that every individual living in that area has an equal chance of being selected for the sample? Ensuring that every individual living in that area has an equal chance of being selected for the sample requires that you have access to everyone in that population. This is because if there are individuals or groups of people that you for some reason do not have access to, their chances of being selected for the sample are zero. Therefore, there is not an equal chance for them to be selected for your sample. These are groups who live their lives in the geographical area you are interested in but, for example, may not have a permanent or official address there and therefore are unable to be contacted, such as people who are homeless. However, if you openly acknowledge that certain groups of people were inaccessible to you and discuss why this does not affect the representativeness of your sample, you may still be able to claim external validity based on your sample.
Consequently, the credibility of your research may be considered strong even though you have not employed strict probability sampling. For example, consider the research question "What is the average proportion of the total household income that is spent on electrical bills by homeowners in the city of Fairbanks, Alaska?" In this case, if you are unable to access groups of people who live in Fairbanks but are homeless, then this will not necessarily weaken the external validity of what you learn about the sample.

Example 2: Why, When You Do Use Probability Sampling, You Still May Not Get a Representative Sample

If you draw a sample comprising 100 students from a population of 500 data science students and 500 linguistics students, and you do not think carefully enough about how to select these 100 students, your sample may end up comprising 95 linguistics students, simply because of the way you selected students for the sample. For example, suppose you randomly recruited respondents on campus between 9 a.m. and 12 p.m. on Monday and Tuesday. However, when doing so you failed to take into account that no data science classes were held on those days during those times. This made it far less likely that you would recruit data science students for your study. Any pattern detected in the data collected from those 95 linguistics students + 5 data science students would therefore not necessarily reflect a pattern present in
the study population of 500 students from each study program. Rather, it could be a pattern arising from the disproportionate number of linguistics students in the study sample. This disproportionate number of linguistics students in the sample actually disqualifies what you learn about the sample from being generalized to the population of 500 data science students and 500 linguistics students. Consequently, it is who makes up your sample that either enables or prevents you from making statistically reasonable claims about the study population based on what you learn from that sample. This is why you will need to think carefully about who will be in the sample in your study in order for you to be able to generalize, in a statistically reasonable way, what you learn from that sample to the study population.

CHOOSING AN ANALYSIS PROCEDURE SUITABLE FOR ANSWERING YOUR RESEARCH QUESTION

We saw in the last section that applying probability-based principles when making decisions about selecting your study population, or a subsample of it, is part of establishing that your research design is statistically reasonable. Another part of establishing that your research design is statistically reasonable relates to your choice of statistical procedures to analyze the data collected from that population or sample. This is because different types of research questions require different types of analysis procedures to answer them. A specific procedure for statistically analyzing your research data enables you to answer a specific research question only if that analysis procedure provides the type of information needed to answer that research question.
For example, a procedure for statistically analyzing the research data designed to quantify a characteristic within one group of people in your study population (e.g., teenagers) will not provide the type of information needed to answer a research question about comparisons across different groups of people (all age groups) in that study population related to that characteristic. Therefore, it is the research question—what you need to know about and why—that guides your thinking when you are in the process of deciding what specific procedure for statistically analyzing the research data to use in your design. From our discussion in Chapter 3, you will remember that the variables related to your research questions are "something of interest that varies in relation to either the people or the places that you are studying." Most often research question(s) will focus on at least one variable. For example, consider this research question: How does changing from instructor-centered teaching methods to more learner-centered teaching methods affect students' learning outcomes? The variables addressed by this research question are "instructor-centered teaching methods," "learner-centered teaching methods," and "learning outcomes." In addition, you may have identified variables such as "study program," "subject matter," "age," and "level of education" as potentially relevant to answering your research question, even though that research question does not specifically include those variables. What type of procedure for statistically analyzing the research data will enable answering your research question depends on what type of issue the research question addresses in relation to one or more of these variables of the study.
Research Questions About What Is Going On in a Study Population

If your question is about what is going on in a study population related to your variables of interest, there are two groups of procedures for statistically analyzing the research data that will enable answering that question:

1. Descriptive analysis procedures
2. Correlational analysis procedures

We discuss them separately in what follows.

Descriptive Procedures

Descriptive analysis procedures are used to investigate and then quantify the variables of the study in order to describe trends in, and characteristics of, the study population. They can also be used to compare such descriptions and trends across subgroups of a study population. For example, assume your area of interest is turnout at local elections and that your research question is Which factors affect turnout at local elections in constituency X? To answer this question, you can approach members of constituency X and ask them to tell you, for example, their age, level of education, how many children they have, and so on, as well as how likely it is that they will decide to vote at the next local election. "Age," "level of education," "number of children," and "probability of deciding to vote at the next local election" are the variables of the research. From the data set you have obtained, you can calculate the average probability of deciding to vote in the next local election among the members of that local community. This is an example of quantifying a variable of the study in order to describe an aspect of the study population of interest (in this case quantifying the variable "probability of deciding to vote in the next local election").
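As a sketch of such descriptive quantification, the following snippet (with invented toy responses, not real survey data) computes the average of the variable for the whole sample and then for each age subgroup:

```python
from collections import defaultdict
from statistics import mean

# Invented toy data for the turnout example: each record is one respondent's
# age group and self-reported probability of voting at the next local election.
responses = [
    ("18-40", 0.55), ("18-40", 0.60), ("18-40", 0.50),
    ("41-69", 0.70), ("41-69", 0.80),
    ("70+",   0.40), ("70+",   0.35),
]

# Quantify the variable for the whole sample ...
overall = mean(p for _, p in responses)

# ... and describe how it varies across subgroups (here, age groups).
by_group = defaultdict(list)
for group, p in responses:
    by_group[group].append(p)
averages = {group: mean(ps) for group, ps in by_group.items()}

print(round(overall, 3))
print({g: round(a, 2) for g, a in averages.items()})
```

These subgroup averages are descriptive only; as the text explains next, deciding whether the differences between them are more than coincidental requires a further statistical procedure.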
You can also calculate the average probability of deciding to vote in the next local election across subgroups of the population (e.g., this could be age groups, groups defined by how many children each person has, groups defined by level of education, and so on). The computed average probability of deciding to vote in the next local election for each subgroup enables you to describe variation in expected turnout at the next local election across subgroups in the population comprising all members of the local community in question. You are likely to find that the average probability of deciding to vote in the next local election varies across such subgroups. However, you will still need to statistically assess that variation. In other words, you still need to decide whether the variation is due to underlying differences across the corresponding subgroups of the population in question, rather than being coincidental for the particular group of people you collected data from. To do this you will need to include as part of your research design a procedure for statistically analyzing the research data appropriate for comparing numerical values representing a population characteristic (in this case, the average probability of deciding to vote in the next local election) across subgroups of that population, and which will also enable you to decide whether any observed difference between those groups is likely to be coincidental or not. In the box below, we direct you to some of those procedures.
PUTTING IT INTO PRACTICE: PUTTING COMPARING GROUPS INTO PRACTICE

To compare the average value of a population characteristic between two groups and decide whether an observed difference between them is likely to be coincidental or not, you can apply a procedure for statistically analyzing the research data called the t-test. The t-test is one of many procedures for statistically analyzing data used for this purpose. For example, for large samples, the z-test may be appropriate. Other options for procedures for statistically analyzing data designed to compare something across groups include ANOVA, chi-square (χ2), the Kruskal-Wallis H test, and many more. Which one is most appropriate to use depends on the nature and amount of the data you have, as well as how many variables you set out to compare. You will need to find out more about how to do these procedures if you are thinking of using them in your study design. However, no matter how well you do these procedures, the most important thing, as we have discussed, is to know why you are using them.

Correlational Procedures

Rather than simply describing trends and characteristics related to the variables of interest in your population of interest, your research question(s) may require you to look for the existence of, and then interpret, relationships or correlations between those variables. In such cases, correlational analysis procedures will need to be part of the research design to enable you to establish the existence (or not) and nature of these correlations. For example, consider this research question: How does turnout at local elections relate to age in constituency X?
Calculating the average probability of deciding to vote at the next local election across age groups in the population comprising members of constituency X, and then simply comparing that probability across age groups, will not reveal the nature and strength of a possible relationship between the variables "probability of deciding to vote at the next local election" and "age." For example, if you find that the probability of deciding to vote at the next election is lowest among people 70+ years old, you still won't know exactly what the relationship between the variables "probability of deciding to vote at the next local election" and "age" is. Is it the case that as age increases, the probability of deciding to vote in the next local election decreases (see the illustration to the left in Figure 8.1 below)? Or is it maybe the case that once people turn 70, the probability of deciding to vote in the next local election drops, but then stays the same as age increases (see the illustration to the right in Figure 8.1 below)? Therefore, you will need to use a procedure for statistically analyzing the research data designed to identify and describe the relationship between the variables in question, that is, some form of correlational approach. This will enable pattern detection, such as the pattern related to the link between the variables "age" and "probability of deciding to vote in the next local election." Such pattern detection will underpin claims about the population, such as the claim "as age increases, members of this population become less likely to decide to vote at the next local election."
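As a sketch of such a correlational procedure, the snippet below computes Pearson's correlation coefficient directly from its definition, using invented toy data in which the probability of deciding to vote falls with age. (A real analysis would typically use an established statistics library and would also assess whether the observed correlation is likely to be coincidental.)

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient r, computed from its definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: respondents' ages and their stated probability of
# deciding to vote at the next local election.
ages = [20, 30, 40, 50, 60, 70, 80]
vote_probs = [0.80, 0.75, 0.70, 0.60, 0.55, 0.40, 0.30]

r = pearson_r(ages, vote_probs)
print(round(r, 3))  # strongly negative: higher age goes with lower probability
```

A value of r near -1 is the kind of evidence that underpins the claim "as age increases, the probability of deciding to vote decreases"; note, however, that r only captures linear relationships, so a drop-at-70 pattern like the one in Figure 8.1 would need a different analysis.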
Chapter 8 • Foundational Design Issues When Using Quantitative Methods   195

FIGURE 8.1 ■ Detecting Patterns
[Two illustrations, not reproduced here. In each, the vertical axis shows the probability of deciding to vote in the next local election, increasing upward, and the horizontal axis shows age (18-40, 41-69, 70+), increasing toward the right.]
Illustration to the left: As age increases, the probability of deciding to vote in the next local election decreases.
Illustration to the right: For ages below 70 years old, the probability of deciding to vote in the next local election is constant across the age groups. However, once people turn 70, the probability of deciding to vote in the next local election drops, but then stays the same as age increases.

To sum up, descriptive and correlational analysis procedures can be used to explore the characteristics of the study population, that is, to detect and describe characteristics of the population. This would be the analytical strategy to use if the purpose of the research is to answer the question, “Which factors affect turnout at local elections?” Answering this question requires your study design to contain statistical procedures enabling exploration of a data set across several variables to identify and describe factors that, according to the data, affect the turnout, as well as any patterns related to links between those factors.

Research Questions About Why Something Happens in a Study Population

Some research questions require you to do more than describe trends and patterns about variables in a population. They require you to make statements about why those trends and patterns occur in that population. For example, consider this research question: What causes turnout at local elections to be less among people over 70 compared to other age groups?
Detecting a pattern telling us that the expected turnout at the next local election in constituency X is less among people over 70 compared to other age groups does not indicate that turning 70 is, in itself, a reason for not voting at local elections. Rather, there could be a variety of reasons for the detected pattern. For example, do people over 70 feel that the issues raised by the candidates are not relevant to them? Are they tired of politicians not sticking to their promises after the election? Or maybe the detected pattern has something to do with how the specific election procedure is organized, for example, if older people have to physically turn up to vote and find it difficult to get there because they no longer drive. Therefore, revealing why the expected turnout at the next local election in constituency X is less among people over 70 compared to other age groups requires you to identify and then select specific factors that you have reason to believe cause the detected pattern to occur. Then you must design your research in such a way that the design enables detecting any effect on turnout at elections of each of those specific factors. The design must also include ways of making sure that the effects of those specific factors are not mixed up with
the effect of any other possible factor. You will see such designs called experimental or quasi-experimental in textbooks about specific statistical procedures.11

TIP
A REMINDER

You will recall from Chapter 5 that in experimental or quasi-experimental research approaches, you develop a highly structured research design, making sure all variables under consideration are taken into account, while at the same time making sure that they are not affected by any other factors that might influence the outcome. You will also recall that we included an extended discussion of an example of this type of highly structured quantitative research design in Chapter 5, namely the Randomized Controlled Trial (RCT).

Research designs enabling answers to why questions are quite rigorous in that they must enable explicit descriptions of how variables relate to each other, as well as being transparent in terms of identifying possible effects of variables not included in the study. For example, assume you make a claim about why the reading skills development program currently used in schools in your country does not help students with dyslexia improve their reading fluency. When doing this, you are also implicitly suggesting that using a different program for dyslexic students, or changing the factors that cause the existing program not to work, will improve the reading fluency of the dyslexic students. This causes you to focus on variables related to this implicit assumption. In other words, you focus on specific variables related to an assumption you have about the relationship between a reading skills development program and reading fluency among dyslexic students. Moreover, you omit any other variable that could be related to the issues in question, for example, contextual peer factors such as the level of trust in peers in the classroom or the dyslexic students’ acceptance or rejection by classroom peers.
So how can you make sure that what you claim about the link between a specific reading skills development program and reading fluency among dyslexic students is credible, given that a range of factors that possibly affect that link is not included in the study? That this question is not easy to answer tells you that it is quite difficult to make credible claims about why something is as it is when dealing with people in social settings. The credibility of research findings arising from an experimental or quasi-experimental design is as much a design issue as it is an issue of choosing an appropriate procedure for statistically analyzing the research data. This is because no procedure for statistically analyzing the research data can in itself establish cause–effect relationships between variables, that is, why something happens in a study population. To be credible, the results of these procedures must be able to be replicated, and therefore corroborated,12 by other researchers. Reproducing research results with the intention of confirming the findings of an original study can be very difficult if the original study is not transparent about its design, including which variables are included, which are left out, and all the assumptions made about both sets of variables. Consequently, the level of rigor required in an experimental/quasi-experimental research design can only be achieved by developing a hypothesis about a proposed cause–effect relationship between the variables of interest, and then designing the study to test that hypothesis. In what follows, we explore how research questions that take the form of
hypotheses affect both your choice of procedure(s) for analyzing your research data and what you will need data about.

When the Research Question Takes the Form of a Hypothesis

You will recall from Chapter 3 that a hypothesis is a statement about what we predict empirical evidence will reveal about a problem of interest in a specific situation in a specific context. It is a “hunch derived from an informed reading of the literature, a theory, or personal observations and experience . . . capable of being tested” (Nardi, 2018, p. 48). For example, consider this research question: Which factors affect turnout at local elections? Now suppose that what you want to do is to test the following hypothesis developed from this research question: For ages over 70, an increase in age corresponds to a decrease in probability of deciding to vote at the next local election. In this case, you will collect data only about the variables involved in the hypothesis (“age” and “probability of deciding to vote at the next local election”), because it is these two specific variables you are interested in. Finally, you will apply a procedure for statistically analyzing those data designed to test the specific hypothesis in question. Testing the hypothesis means applying a procedure for statistically analyzing the research data designed to detect any inconsistency between the data collected about one or more variables and the hypothesis made about that (those) variable(s), if there is such an inconsistency to detect. As with research questions in general, not all procedures for statistically testing hypotheses are designed to test all types of hypotheses. What is an appropriate procedure for statistically testing a hypothesis depends on what type of issue the hypothesis addresses in relation to one or more of the variables of the study.
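To illustrate what "testing the hypothesis" means operationally, here is a self-contained sketch using a permutation test, one simple procedure (among many) for checking whether an observed negative association could plausibly arise by chance. The data are invented, and a real study would choose its test procedure with expert statistical advice.

```python
import random
from statistics import mean

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical participants aged over 70 and their probability of voting
ages = [71, 73, 75, 78, 80, 83, 86, 90]
probs = [0.62, 0.60, 0.55, 0.50, 0.48, 0.40, 0.33, 0.25]

observed = corr(ages, probs)   # strongly negative, as the hypothesis predicts

# One-sided permutation test: if age and probability were unrelated, how often
# would random re-pairings produce a correlation at least this negative?
random.seed(0)                 # fixed seed so the sketch is reproducible
n_perm = 2000
extreme = sum(1 for _ in range(n_perm)
              if corr(ages, random.sample(probs, len(probs))) <= observed)
p_value = extreme / n_perm     # small: hard to reconcile with "no relationship"
```

A small p_value means the data are inconsistent with the assumption of no relationship; it does not, by itself, establish why the relationship exists.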
Aspects That Affect Whether or Not the Conclusions From Testing a Hypothesis Are Statistically Reasonable

Several aspects of a research design, such as sample size and significance level, will affect what is known as the power of a hypothesis test. Simply put, the power of a hypothesis test is its ability to correctly claim support for the statement making up the hypothesis. Research designs that enable hypothesis tests that are high in power will improve the probability of the test detecting whatever the test is set up to look for in the data set, if there is something present to detect. Consequently, a research design that includes testing a hypothesis should enable a hypothesis test with as much power as possible.

TIP
WHAT YOU WILL NEED TO LEARN MORE ABOUT RELATED TO HYPOTHESIS TESTING

There are a lot of statistical concepts involved in deciding whether or not to claim support for your hypothesis. These include

• p-value
• Type I error
• Type II error

As well as

• what happens to the power of a hypothesis test when the size of the sample is changed,
• what happens to Type II error as Type I error decreases,
• what the connection between the significance level α, power, and sample size has to do with Type I and Type II errors.

To help you navigate these concepts, we advise you to seek expert statistical help. Further, it is important to keep in mind that not all procedures for statistically analyzing the research data are designed to test all types of hypotheses.13 Rather, it is the nature of the hypothesis that guides your thinking about which specific hypothesis testing procedure to apply when testing a specific hypothesis, for example, whether the hypothesis is about testing assumptions about relationships between variables or about comparing characteristics across subgroups. We strongly recommend you seek expert statistical help when designing your research to ensure that there is a match between the nature of the procedures for statistically testing hypotheses that you include in your design and the types of hypotheses you are designing your research to test.

Still More Thinking to Do

At this point, we have established that designing quantitative research that is statistically reasonable requires you to think about statistical procedures at two levels:

1. the role of procedures derived from laws of statistics and probability when aiming to learn something about the group of people that is the subject of the research
2. how different types of research questions require different types of statistical analysis procedures to answer them

Your research question guides your thinking when deciding what procedure for statistically analyzing the research data is appropriate to use in your research design.
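Returning briefly to the hypothesis-testing tip above, the connection between significance level, power, effect size, and sample size can be made concrete with a back-of-the-envelope calculation. The sketch below uses the standard normal approximation for a two-sided, two-sample comparison of means; the effect sizes are assumed values, and real study planning should rely on proper power-analysis software or a statistician.

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # guards against Type I error
    z_beta = NormalDist().inv_cdf(power)           # guards against Type II error
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect needs far fewer participants than a "small" one:
medium = n_per_group(0.5)   # about 63 per group
small = n_per_group(0.2)    # about 393 per group

# Demanding more power raises the required sample size:
high_power = n_per_group(0.5, power=0.90)   # larger than `medium`
```

This also shows the trade-off described in the tip: lowering alpha (fewer Type I errors) or raising power (fewer Type II errors) both push the required sample size up.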
Remember, no matter how competently individual statistical procedures may be conducted, if they are not the correct ones in terms of enabling you to reach the type of conclusions needed to answer your research questions, then they will be of little or no use to your research. However, even when you do find an appropriate procedure for statistically analyzing the research data that provides the kind of information needed to address your research question, you still have more thinking to do. This thinking relates to what type of data you will need to collect in order to be able to perform the statistical procedures required to reach the type of conclusions needed to answer your research questions. You will need to think about this before you begin collecting any data, because once you have your data set, if that data set does not enable conducting the procedure for statistically analyzing the research data that you need to conduct to address your research question, then your research design will fail to achieve its intended purpose. Simply having numerical data (even lots of it) is not enough to ensure that your research is statistically
reasonable. This means that when designing your research, you will need to think about what types of data you have to choose from, and what will be the right type of data to collect in your study. In the next section of the chapter, we take a closer look at what we mean by types of data and which type of data will be the right type.

WHAT TYPES OF DATA ARE THERE?

Recall that the variables of a study are concepts of interest in that study, related in some way to addressing your research question(s). For example, for a study population comprising teachers, the variables of interest in a study focusing on that population might be age, level of formal education, level of education in which the teacher works, the teacher’s subject area(s), and so on. For each variable, you need to define all possible values that the variable might take. For example, the variable “age” might take values in the form of integers (i.e., whole numbers)14 between 18 and 75, and the variable “level of formal education” might take the values “bachelor’s degree/professional graduate certificate in education,” “master’s degree/postgraduate certificate in education,” or “PhD/specialist certificate.” The data you collect about those variables are produced by assigning a numerical value to a variable of interest for each participant included in the study. When doing this, you are measuring the variable of interest. For example, if a respondent provides the answer “51” to the question “What is your age?” then the value “51” is assigned to the variable “age” (thereby measuring the variable “age”).
Because you need to measure all the variables that are included in your research, your research design must include a measurement instrument specifically designed to produce (1) data representing relevant information about/measures of all the variables included in the study, and (2) data of the type that meets the requirement(s) of the procedure for statistically analyzing the research data needed to produce the kind of information that will underpin an answer to the research question. Such a measurement instrument can, for example, take the form of a set of questions (the complete set of questions is called a questionnaire), or it could be a structured observation schedule. What type of data will be produced by measuring a variable depends on what type of values you assign to that variable. The type of values you assign to a variable is called the level of measurement of that variable. Measuring a variable can produce nominal, ordinal, or interval/ratio data. You need to know what differentiates these types of data from each other, because not all types of data will allow or enable all statistical procedures to be performed on those data. Therefore, when designing your quantitative research, your design must enable you to obtain the type of data that will allow you to conduct the statistical procedures that will enable you to address your research questions.

Nominal Data

Nominal data is the type of data produced by assigning values to a variable that take the form of labels. For example, a specific university could categorize its study programs related to physics. In your research study, these “study programs related to physics”
may be a variable of interest. Values related to this variable of interest can be assigned using the way that the university categorizes these physics-related study programs as being mainly about either astrophysics, geology, meteorology, material science, electrical engineering, or nuclear and particle physics. Hence the labels astrophysics, geology, and so on are the possible values assigned to the variable “study programs related to physics.” In some studies, you may apply numerical-based coding to those categorizations. For example, you might code astrophysics as “1,” geology as “2,” and so on. What the numeric coding can be used for is, for example, applying count functions embedded in software used for statistical analysis. For example, the university can collect data about what people enter into the search field on the university web page. If the number “1” is assigned to any search string15 containing the word astrophysics, statistical analysis software can count how many times the word astrophysics is included in a search string on their web pages, and thereby get a sense of the public interest in that specific academic discipline. By assigning the number “2” to any search string containing the word geology and so on, such counting provides data about the relative interest in each of the disciplines within the wider field of physics. However, even though nominal data can be coded as numbers, those numbers cannot be ranked. For example, “astrophysics” is not smaller or less than “geology” just because astrophysics is coded as 1 and geology is coded as 2. Moreover, the numbers cannot be used to perform calculations. For example, there is no average value between astrophysics and geology even though the average of 1 and 2 is 1.5.
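The search-string counting described above can be sketched in a few lines. The codes, labels, and example searches here are invented; the point is that counting nominal codes is legitimate, while arithmetic on the codes is not.

```python
from collections import Counter

# Numeric codes for the physics-related study programs (nominal labels)
codes = {"astrophysics": 1, "geology": 2, "meteorology": 3,
         "material science": 4, "electrical engineering": 5,
         "nuclear and particle physics": 6}

# Hypothetical strings entered into the university's search field
searches = ["astrophysics masters", "intro geology", "astrophysics jobs",
            "meteorology forecast", "geology field trip", "astrophysics"]

hits = Counter()
for s in searches:
    for label, code in codes.items():
        if label in s:
            hits[code] += 1

# Counting code occurrences is fine: three searches mention astrophysics.
# But the codes themselves cannot be averaged or ranked: (1 + 2) / 2 == 1.5,
# yet there is no discipline "halfway between" astrophysics and geology.
```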
Ordinal Data

Ordinal data are produced by defining the possible values of a variable in such a way that the range of possible values can be ordered, or ranked, but where the difference between each of the ranked values is not defined. In other words, ordinal data has the properties of nominal data, but in addition ordinal data offers the opportunity to order or rank the values of the variable being measured. Lists of results are an example of ordinal data. For example, suppose Huey, Dewey, and Louie enter a contest. Dewey wins first prize, Huey wins second prize, and Louie ends up third. Then we can rank their performance, as we know that Dewey performed better than Huey in that competition, and Huey better than Louie, but we do not know anything about how much better Dewey performed compared to Huey, or Huey compared to Louie. Another example of ordinal data is the use of a device by the exit door of a shop with the text “We want your feedback! How satisfied are you with your shopping experience?” written on it, followed by three clickable buttons ordered from least to most satisfied. When customers click one of the buttons, ordinal data are produced.
Interval/Ratio Data

Interval/ratio data are produced by defining the possible values of a variable in such a way that the range of possible values can be ordered, or ranked, and in such a way that the difference between one value and the next on the ordered list of possible values is fixed across the complete list of values. The fixed difference between one value and the next across the total range of values enables doing calculations (involving addition, subtraction, multiplication, and division) on this type of data. For example, when measuring the variable “age,” if you assign the values 20, 25, and 35 to participants A, B, and C, respectively, then you know that participant B is 5 years older than participant A, and that participant C is 10 years older than participant B. You also know that the age difference between participants A and B (5 years) is half the age difference between participants B and C (10 years). It is only for the interval/ratio level of measurement that expressions like “twice as many,” “half as much,” or “three times as often” make sense. In other words, interval/ratio data has the properties of ordinal data, but in addition interval/ratio data offers the opportunity to define a difference between each of the ranked or ordered values of the variable being measured. For example, when counting the frequency of an event, registering test scores, measuring age or height, or collecting data about annual income, you get interval/ratio data.

PUTTING IT INTO PRACTICE
DECIDING ON LEVEL OF MEASUREMENT IN PRACTICE

Think of a situation where a sports journalist is asked to rank NBA (National Basketball Association in the USA) players based on numerical data such as the number of assists per game, points per game, steals per game, and blocks per game. The journalist is asked to provide a list of the top 5 players.
If, for example, player X has an average of 32.0 points per game, player Y has an average of 28.0, and player Z has an average of 26.0, the difference in average points per game between player X and player Y is 4.0, and the difference in average points per game between player Y and player Z is 2.0. Moreover, the difference in average points per game between player X and player Y is twice that between player Y and player Z. The fact that we can quantify the difference between the number of assists, points, steals, or blocks per game across the players in question, and that statements such as “the difference in average points per game between player X and player Y is twice that between player Y and player Z” make sense, tells us that the data used by the journalist to provide the list of the top 5 players are interval/ratio data. On the other hand, the data produced by the journalist’s list of top 5 players is ordinal. This is because you know that the player ranked as number 1 is considered better than the player ranked as number 2, but you are not able to quantify how much better the player at the top of the list is compared to the player in second place. What makes nominal, ordinal, and interval/ratio data different from each other is summed up in Figure 8.2:
FIGURE 8.2 ■ Three Types of Data and What Makes Them Different From Each Other
• Nominal (labels only). Examples: eye color, subject matter of study program, marital status
• Ordinal (labels that can be ranked). Examples: student letter grade (A-F), football team rankings, customer satisfaction ratings
• Interval/ratio (ranked values with fixed differences). Examples: years, days, age, height, probability of something

To sum up, we have established that the data you collect about the variables of your research are produced by assigning a numerical value to each variable of interest in the study for each participant included in the study. When doing this, you are measuring the variable of interest. Moreover, the form that such measurement takes defines what type of data will be produced by that measurement, and therefore defines which procedures for statistically analyzing those data can be conducted in a statistically reasonable way. In the next section, we take a look at two examples that illustrate this point.

Activity
Thinking About What Type of Data You Need Before Collecting Any Data

Figure 8.2 above demonstrates that interval/ratio data can be treated as ordinal data, and ordinal data can be treated as nominal data—but not the other way around. Because different types of data allow different types of statistical analyses to be performed on those data, you need to think through the implications of different types of data for your study design before you collect any numerical data. This is because once you have collected your data, there is no way of changing, say, ordinal data to interval/ratio data, or nominal data to ordinal data. In other words, you are facing a problem if your chosen procedure for statistically analyzing the research data assumes you will have interval/ratio data, while you have collected ordinal data: your research design, and the data and analyses arising from that design, will not enable you to reach statistically reasonable conclusions about your research questions.
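The one-way "downgrading" relationship described in the activity above can be illustrated with a short sketch (the ages and age bands here are invented):

```python
# Interval/ratio data: exact ages collected from five participants
ages = [23, 51, 35, 67, 29]

# Downgrading to ordinal: ordered age bands keep the ranking, lose the differences
def band(age):
    if age < 30:
        return "18-29"
    if age < 50:
        return "30-49"
    return "50+"

bands = [band(a) for a in ages]   # ['18-29', '50+', '30-49', '50+', '18-29']

# Downgrading to nominal: category membership only, order ignored
categories = set(bands)           # {'18-29', '30-49', '50+'}

# The reverse direction is impossible: from the band "30-49" alone there is
# no way to recover whether a respondent was 35 or 49. This is why the level
# of measurement must be decided before any data are collected.
```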
With this in mind, think about what type of data you would need to collect to answer the following questions, and justify your choices:

1. What are the reasons given for absence from work among business executives?
2. Where do people of different age groups prefer to take overseas holidays?
3. Does the level of stress in patients about to undergo surgery depend on any specific type of nurse intervention?

If you have already developed research questions of your own, use them instead.

HOW DIFFERENT TYPES OF DATA ENABLE DIFFERENT TYPES OF KNOWLEDGE

Data differ in terms of what procedures for statistically analyzing research data, and therefore what type of knowledge, they enable. We will illustrate what we mean by this by taking a closer look at two examples inspired by Saris and Gallhofer (2014). The examples highlight how types of data affect what you potentially can learn (as well as what you definitely won’t be able to learn) about the research question(s) that you are designing your research to answer. Consider this question: How does the status of different occupations differ compared to the occupational status of a schoolteacher? In what follows, we provide examples of two different ways of collecting data that might be relevant for answering this question. Version 1 produces interval/ratio data and Version 2 produces ordinal data.

Version 1: Occupations differ with respect to status. We ask you to estimate the status of a series of occupations. If we give the status of the occupation of a schoolteacher a score of 100, how would you evaluate the other occupations? For example, if an occupation is, in your opinion, twice as high in status compared to a schoolteacher, score it 2 × 100 (i.e., 200). If the status of the occupation is half that of a schoolteacher, divide 100 by 2 (which gives 50).

What is the status of a
General physician? Your answer: ____
Real estate agent?
Your answer: ____
Bus driver? Your answer: ____
Carpenter? Your answer: ____

Version 2: Occupations differ with respect to status. We ask you to evaluate the status of a series of occupations. The reference point for you to use when making your evaluation is the status of the occupation of a schoolteacher. For each occupation on the list, we ask you to evaluate the status of that occupation as either lower than, the same as, or higher than that of a schoolteacher.
                     Lower    Same    Higher
General physician
Real estate agent
Bus driver
Carpenter

The way that status is measured in Version 1 produces interval/ratio data. Because interval/ratio data has the properties of ordinal data, you will be able to find how many respondents evaluate the status of a specific occupation as lower than, the same as, or higher than that of a schoolteacher simply by treating any value lower than 100 as having a lower rank than the value 100, and any value higher than 100 as having a higher rank than the value 100. Moreover, the way that status is measured in Version 1 allows performing calculations involving addition, subtraction, multiplication, and division on the data, and therefore it is possible to, for example, quantify the average opinion about the status of each occupation compared to that of a schoolteacher, across the group of research participants. This facilitates, for example, developing a “status scale” reflecting the average opinion of the status of each occupation compared to that of a schoolteacher. The way that status is measured in Version 2 produces ordinal data. This is because the values that the variable “status” can take in Version 2 of the question (“lower than,” “same as,” and “higher than” that of a schoolteacher) can be ranked; there is a logical order to them (“lower than” is less than “same as,” which is less than “higher than”). Therefore, the data produced by the way of measuring status demonstrated in Version 2 of the question enable you to say something about how many respondents evaluate the status of a specific occupation as lower than, the same as, or higher than that of a schoolteacher. However, you cannot say anything about how much lower or higher the status of a specific occupation is compared to that of a schoolteacher. Neither are you able to calculate average opinions about the status of each occupation compared to that of a schoolteacher.
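A short sketch makes the contrast between the two versions concrete (the respondents' answers below are invented). Version 1's scores support averaging and can still be downgraded to ranks, whereas Version 2's labels support counting only:

```python
from statistics import mean
from collections import Counter

# Version 1 (interval/ratio): status scores relative to schoolteacher = 100
physician_scores = [180, 200, 150, 220, 190]
average_status = mean(physician_scores)        # averaging is legitimate: 188

# Interval/ratio data can still answer the ordinal question:
rated_higher = sum(1 for s in physician_scores if s > 100)   # 5 of 5

# Version 2 (ordinal): only the labels "lower"/"same"/"higher" are available
physician_ratings = ["higher", "higher", "same", "higher", "higher"]
counts = Counter(physician_ratings)            # counting is fine
# But there is no meaningful "average" of these labels, and no way to say
# HOW MUCH higher respondents consider the physician's status to be.
```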
If this is what you need to say something about in order to answer your research question(s), then you will need interval/ratio data for the statistical analyses you would need to do. These examples illustrate the point we made previously, namely that not all types of data will allow or enable all statistical procedures to be performed on those data. Therefore, when designing your quantitative research, your design needs to enable you to obtain data of a type that meets the requirements of the procedures for statistically analyzing the research data you will undertake in order to answer your research questions.

TIP
BE AWARE YOU MAY NEED TO MANEUVER BETWEEN WHAT IS IDEAL AND WHAT IS FEASIBLE

Collecting data that meets the data-related requirements of the procedures for statistically analyzing that research data is not always a clear-cut issue. This is because collecting data of a type that enables the statistical analyses needed to answer your research questions often involves navigating trade-offs between what is ideal data-wise and what is feasible.
Why Making Sure You Collect Data of the “Right” Type Is Not Enough to Ensure That Your Research Design Is Statistically Reasonable

It is possible to design your research to obtain the right type of data. However, if that right type of data is obtained from the wrong people, for example, the wrong study population in light of your variables, or from an inadequate sample of the right study population, then it will not be the right data for your study. To design your study in a credible way, all parts of that study design must be statistically reasonable. The design is only as credible as its weakest link. No part of designing your research can be seen in isolation from any other part. This is because the credibility of your quantitative research design, and findings, lies as much in the way that the numerical data has been collected, and the form that that data takes, as it does in the statistical procedures used to analyze that data.

TIP
TO USE STATISTICS WELL, YOU DON’T NEED TO BE A STATISTICIAN IF YOU SEEK HELP WHEN YOU NEED IT

Remember: Statistics are a means in the design and conduct of research using some form of quantitative approach, not an end in themselves. Therefore, you do not need to know every detail of every concept involved in every type of procedure for statistically analyzing the research data that you are planning to use. What you do need is sufficient knowledge to avoid making major, irreparable mistakes in the design of your research—mistakes that would ultimately cause your research design to fail to achieve its purpose. If you don’t have that knowledge, you need to recognize that you must obtain it, and to know how you might do so. Discussions in textbooks about statistics, as well as in textbooks about statistics in relation to doing quantitative research, tend to be technical. Therefore, if you

1. are not very familiar with statistics, or
2.
feel you need to check that your thinking about statistical data analysis and its outcomes is on the right track,

we suggest that you seek expert help with the statistical procedures involved in selecting the participants of your research, as well as with the procedure(s) for statistically analyzing the research data that will form part of your developing design. A statistician can assist you in sorting out what is essential for you to know, and thereby help you avoid feeling as if you are drowning in technical details that in themselves are not really helping you develop a well-thought-through research design. To reiterate, a strong word of advice from us is to seek timely help from a statistician.

CONCLUSIONS

In this chapter, we have focused on aspects of a quantitative research design that will underpin statistically reasonable answers to the research question(s) your research is designed to address. We have seen that designing quantitative research that is statistically reasonable includes establishing clear and explicit links in the research design between your research
206  Research Design

questions, your statistical analyses, the data you collect, and the sample or population that you collect your data from. Establishing these links is the result of a series of decisions about each of these aspects. We capture this series of decisions in Figure 8.3 below:

FIGURE 8.3 ■ Establishing Links Between Aspects of a Quantitative Research Design

• Research questions: The purpose of designing the research in the first place is to arrive at credible answers to the research questions.

• Statistical analysis: The outcomes of statistically analyzing the data of the study will provide the basis for the answers to the research questions the study is designed to address.

• Data: The data of the study are the raw material for the statistical analyses to be done.

• Sample: A subgroup of the study population. The people in the sample are the ones that provide the data of the study.

• Study population: In quantitative research, a general aim is to learn something about large groups of people. Therefore, the design must enable the credible answers to the research questions to be generalizable to the study population—also in cases when data are collected from a (relatively small) subset of that population.

After reading the chapter, if you understand

1. how the aspects included in Figure 8.3 are linked in a quantitative research design,

2. the importance of establishing these links clearly and explicitly,

3. what you need to think about to be able to include such clear and explicit links in your developing research design, and

4. that if you do not establish these links in your research design, you may end up in a situation where you get some of your research design completely right but still end up with a design that is completely wrong and that will not provide credible answers to your research questions,

then we have achieved our goal when writing this chapter.
Because you now understand these four points, you are ready to move to the next phase of your thinking about designing your quantitative research. This phase involves thinking about how you will actually put your research design into practice and do the research. To “do” your research, you will first need to design a measurement instrument, that is, an instrument that enables you to measure the variables of your research. Those measures provide the data for the statistical procedures that can provide a credible basis for answering your research questions.
Once you have your measurement instrument, you will need to put that measurement instrument into practice and use it to collect the numerical data you need. This will involve thinking about how you will approach and interact with the people included in your proposed sample when actually collecting data from them using that measurement instrument. Therefore, in Chapter 9, we put the spotlight on the more operational aspects of putting quantitative research design into practice. We do this by taking a closer look at what you will need to think about when designing a measurement instrument, and then using it to measure your variables of interest so that you can arrive at credible answers for your research questions.

SUMMARY OF KEY POINTS

• For a quantitative research design to enable credible research findings, that research design needs to be statistically reasonable.

• A statistically reasonable quantitative research design is one in which there are clear and explicit links between the numerical data you propose to collect (who from, type, and amount), the type of statistical analyses that you propose to do, and the research questions that you are asking.

• In addition, a statistically reasonable quantitative research design is one in which there are clear and explicit links between the group of people to which you want the answers to your research questions to apply and the (often smaller) group of people from which the numerical data will be collected.

• These links are established by adhering to probability-based principles and rules of statistics that enable you to make claims about the generalizability of your findings from the sample to the wider population from which that sample was drawn.

• A representative sample is a scientific ideal obtainable by true probability sampling, but true probability sampling may not be feasible.
In practice, sampling procedures often involve a degree of non-probability sampling (such as convenience sampling).

• Only when you know which alternatives there are to choose from, and their strengths and weaknesses in terms of enabling or impeding the goal of the research being designed, are you in a position to make informed, thought-through choices about which procedure(s) for statistically analyzing the research data, and which sampling strategies, to incorporate in that design.

• The credibility of a quantitative research design is only as strong as the clarity and explicitness of the links in that design between your research questions, your statistical analyses, the data you collect, and the sample or population that you collect your data from.

• Therefore, the mechanics of using statistical techniques or procedures as part of a research design using a quantitative approach are not the same as designing statistically reasonable research.
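The point about generalizing from a sample to the wider population can be made concrete with a short simulation (our own sketch, with invented numbers, not an example from the chapter): random probability samples drawn from a study population yield estimates that tend to land closer to the population value as the sample grows.

```python
import random
import statistics

random.seed(42)

# Hypothetical study population of 10,000 people with a numerical "score".
population = [random.gauss(50, 10) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Random (probability) samples of increasing size: larger samples give
# estimates that tend to sit closer to the population mean.
for n in (10, 100, 1_000):
    sample = random.sample(population, n)
    est = statistics.mean(sample)
    print(f"n={n:>5}  sample estimate={est:.2f}  population mean={pop_mean:.2f}")
```

This shrinking gap between the sample estimate and the population value is what licenses the probability-based claims about generalizability discussed above.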
KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

estimate (of the numerical value of a population characteristic)
external validity
generalizing in quantitative research
hypothesis
hypothesis testing
power of a hypothesis test
random selection process
representative sample
sample/study sample
sampling strategy
statistical validity
statistically reasonable
study population
variable

SUPPLEMENTAL ACTIVITIES

1. The aim of this activity is to explore how it is possible to get some of your research design completely right but still end up with a design that is completely wrong. To begin your exploration of this question, we would like you to write down what you will need to think about when building a house. Your list probably includes points related to the following areas:

• Some sort of blueprint or plan—the aim of doing the building, knowing what you are building, and what you want to achieve at the end of the building process.

• Materials—What materials will you need to be able to build this? Type and amount. This will vary depending on the planned house—how big it is planned to be, what you want it to look like, the funds and resources you have available to you. Where will you get these materials from to make sure they are up to the standard required?

• The means of putting those materials together—knowing the building procedures that need to be followed when using those materials that will enable the house to be built correctly.

If you fail in the execution of, or make poor or inappropriate decisions about, any of these areas, then it is likely that you will not achieve the aim of building the house. For example, you might have a wonderful blueprint, but if you choose the wrong materials for building what is in that blueprint, then you will not achieve your aims.
Similarly, you might have a wonderful blueprint and choose suitable materials for that blueprint, but not know the building procedures that should be followed to ensure that the house is built properly. Therefore, while you have some things completely right (e.g., blueprint and materials), others may be completely wrong (the way you put them together).

Now try this: When reading the above points, make the following substitutions:

Some sort of blueprint or plan → Research question
Materials → Types of data and who you obtain them from
The means of putting those materials together → Procedure for statistically analyzing the research data
Doing so should give you insights into what we mean by saying that when designing quantitative research, it is possible to get some (even most) of it completely right (e.g., the research question is well developed and the planned analytical procedures are statistically reasonable) but still end up with a design that is completely wrong (e.g., the numerical data collected is of the wrong type and therefore makes the rest of the research not credible).

2. Is your research question well enough developed? Consider your research question and answer the following questions about that research question:

a. Is who or what will need to comprise the population for the study clear from your research question? If so, who or what is it?

b. Does the research question make clear what the main variables of interest in the study are, that is, which aspects, characteristics, or features of the study population you want your claims about the study population to be about? If so, what are they?

c. Does the research question make clear what type of procedure for statistically analyzing the research data will enable answering that question? If yes, what is it and why? What type of data will you need to be able to perform that analytical procedure? Why?

If your answer is no to any of the above questions, think about why you answered no. This will reveal ways in which you will need to develop your research questions further. Then return to your original research question and, using this thinking, develop that question until all of the above questions can be answered with yes. This example highlights the type of reflexive and iterative thinking that you will need to do when designing your quantitative research study.

FURTHER READINGS

Gorard, S. (2003). Quantitative methods in social science: The role of numbers made easy. Continuum.

Stockemer, D. (2019).
Quantitative methods for the social sciences: A practical introduction with examples in SPSS and Stata. Springer Nature.

NOTES

1. Participant here means a person from whom data is collected. You will also see participant used in qualitative research with a slightly different emphasis, namely as a co-participant with the researcher in the research.

2. See Chapter 5 for an extended discussion of the nature and purpose of applying a quantitative approach in research.

3. “The size range of the algae spans seven orders of magnitude. Many algae consist of only one cell, while the largest have millions of cells” (taken from https://www.britannica.com/science/algae).

4. You will see the theory, methods, and practice of generalizing from a sample to a population called statistical inference in textbooks of statistics and quantitative research methods.
5. This is a consequence of mathematical principles derived from probability theory. For example, suppose you flip a fair coin (i.e., a coin for which getting heads and getting tails are equally likely outcomes) 10 times (i.e., 10 random selections). Probability theory tells you to expect five heads and five tails, but you would probably not get that exact result. You could by chance end up with eight tails and two heads, for example. However, by increasing the number of tosses (i.e., increasing the number of random selections) to 1,000, you are more likely to get something close to 500 heads and 500 tails than to get 800 tails and 200 heads.

6. From our discussion in Chapter 3, you will remember that the variables related to your research questions are “something of interest that varies in relation to either the people or the places that you are studying.”

7. For example, if you want to find the average height of the people in your study population, and all of them have the same height, a sample of one is sufficient to find their average height. However, if the variation in height is large among the people in the study population, you will need a larger sample to make sure that the first few you select are not extreme cases, and to make sure that you cover the variety in heights.

8. Quota sampling means that the population is divided into subgroups, where who belongs to each subgroup is defined, for example, by a combination of sociodemographic factors. Then such a subgroup of the study population is chosen on a non-random basis to be the subset of the study population from which sampling will take place.

9. In Chapter 9, we will define what we mean by a variable in more specific terms.

10. Instructor-centred teaching is characterized by the teacher taking the role of being a master of the subject matter. Learners, on the other hand, take the role of passive recipients of knowledge from the teacher.
Practicing learner-centred teaching methods, the teacher takes the role of a co-learner alongside the students and allows for discussion about, and inquiry into, the content of the subject matter.

11. When reading about experimental designs, you will also come across the terms independent variable, dependent variable, confounding variable, extraneous variable, control, and many more. These are all terms you need to understand the meaning of to be able to design experimental research.

12. Corroborate: to support with evidence (from https://www.merriam-webster.com/dictionary/corroborate).

13. This means that the exact way of testing a specific hypothesis depends on the nature of that hypothesis (e.g., testing a hypothesis on relationships between specific characteristics or traits involves different statistical analyses than does testing a hypothesis on differences in characteristics or traits across subgroups of a population) and the nature of the data collected.

14. An integer is a whole number, that is, a number that is not a fraction.

15. By a search string we mean any combination of words, numbers, and symbols entered into a search field.
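The coin-flip reasoning in note 5 can be checked directly with a short simulation (our sketch, not part of the text): as the number of flips grows, the observed proportion of heads settles near 0.5.

```python
import random

random.seed(1)

def proportion_heads(flips: int) -> float:
    """Flip a fair coin `flips` times and return the proportion of heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

# With 10 flips the result can be far from 50/50; with 100,000 flips it
# is almost certainly very close to 0.5.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} flips: proportion of heads = {proportion_heads(n):.3f}")
```

This is the same principle that makes larger random samples more trustworthy than smaller ones: more random selections mean less room for chance to dominate the result.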
9 COLLECTING DATA USING QUANTITATIVE METHODS

PURPOSES AND GOALS OF THE CHAPTER

In Chapter 8, we highlighted that arriving at credible answers to research questions using some form of quantitative approach depends on a statistically reasonable research design. A statistically reasonable quantitative research design is one in which there are clear and explicit links between (1) the numerical data you propose to collect, the type of statistical analyses that you propose to do, and the research questions that you are asking; and (2) the group of people to which you want the answers to your research questions to apply and the (often smaller) group of people from which the numerical data will be collected. Therefore, once you have decided on your statistically reasonable research design, you are going to need to think about how you will actually obtain that numerical data from those people. What method will you use to collect that data?

The issues addressed in quantitative research are often about abstract and complex concepts. Those concepts are what make up the variables of the study. Consequently, making sure that the data you collect can underpin credible answers to your research questions includes thinking about how to

1. measure the variables of the study to make sure that the numerical data arising from those measurements represents relevant information about the variables that your research question(s) are about;

2. design, or select an existing, measurement instrument that can enable you to obtain that numerical data in a reliable and valid way;

3. put that measurement instrument into practice in a way that optimizes both the response rate of your study and the credibility of any findings based on those responses.

Exploring these three points is what this chapter is about. However, a word of caution before we begin. Chapter 8 provides the backdrop for many of the issues we take a closer look at in this chapter.
If you have not read Chapter 8, we strongly encourage you to do so before reading this chapter. The goals of the chapter are to

• Establish what we mean by valid research findings, as well as expand the understanding of what underpins valid research findings beyond the issues of statistical validity and external validity that we discussed in Chapter 8.
• Make clear that arriving at credible research findings when using some form of quantitative approach requires that the dataset is made up of credible information about the variables identified as relevant for addressing the research question(s).

• Emphasize that obtaining data in the form of credible pieces of numerical information about complex, and most often abstract, variables requires making those variables measurable.

• Remind you that the data produced by measuring variables must be of the type needed to be able to perform the procedures for statistically analyzing the research data that you plan to use.

• Identify issues to think about when the measurement instrument is put into practice when collecting data from the participants1 of the study.

• Make clear that assessing the credibility of the findings of a quantitative research study involves assessing the effects of the choices you make about all aspects of the research design, as well as assessing the degree to which those decisions are consistent and in keeping with each other.

MEASURING VARIABLES TO ENABLE VALID RESEARCH FINDINGS

When reading about quantitative research methods, you will come across the concept of validity. Validity is a complex concept, because the term validity focuses on different things depending on what we are talking about when using it. In short, we can say that research (or a research finding) is valid when it is credible, well-founded, reasonable, justifiable, and defensible. In Chapter 8, a large part of the discussion focused on considerations related to the following:

a. whether the statistical analysis procedures that are part of a research design can provide the kind of information needed to answer your research question(s);

b. whether the type of data you propose to collect will enable those statistical analyses to be performed;

c.
if your research design includes the use of a sample, whether what you learn about that sample by statistically analyzing the data collected from the people comprising it is generalizable beyond that sample (preferably to the study population as a whole).

Points (a) and (b) are about statistical validity, and point (c) is about external validity. Both statistical and external validity are facets of the overall concept of validity, but in themselves they are not sufficient to ensure valid, that is, credible, well-founded, reasonable, justifiable, and defensible research findings. What remains to think about, related to the concept of validity as it applies to quantitative research, are the parts of validity concerning the methods used to obtain the data. For example, if you are using some sort of survey as your method of choice in your study design, then how will your survey-related data actually be collected? We discuss this in detail in later parts of the chapter.
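Point (c), external validity, can be illustrated with a toy simulation (the numbers and the age-score relationship are entirely our invention): a probability sample recovers the population value, while a convenience sample drawn from an unrepresentative subgroup does not.

```python
import random
import statistics

random.seed(7)

def make_person():
    """A hypothetical person: an age and an opinion score that, by
    construction, rises with age."""
    age = random.randint(18, 80)
    score = random.gauss(30 + age / 2, 5)
    return age, score

population = [make_person() for _ in range(20_000)]
pop_mean = statistics.mean(score for _, score in population)

# Probability sample: every member of the study population can be selected.
random_sample = random.sample(population, 500)

# Convenience "sample": only people under 30 (e.g., students close at hand).
convenient = [p for p in population if p[0] < 30][:500]

rand_est = statistics.mean(score for _, score in random_sample)
conv_est = statistics.mean(score for _, score in convenient)

print(f"population mean score:  {pop_mean:.1f}")
print(f"random-sample estimate: {rand_est:.1f}")
print(f"convenience estimate:   {conv_est:.1f}")
```

Because age drives the score in this made-up population, the under-30 convenience sample systematically underestimates the population mean, however carefully the data are analyzed afterwards: a statistically valid analysis cannot rescue an externally invalid sample.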
Chapter 9 • Collecting Data Using Quantitative Methods   213 TIP ESTABLISHING THE MEANING OF SOME OF THE WORDS USED IN THIS CHAPTER Recall from Chapter 8 that your research design must include the design of a measurement instrument. That measurement instrument must be specifically designed to produce numerical data that provides relevant information about measures of all the variables included in the study. Moreover, the numerical data produced by that measurement instrument must be in a form able to meet the requirement(s) of the statistical analyses that you have identified you need to do in order to answer the research question(s). Individual measurement items, each designed to measure (an aspect of) one of the variables included in the study, collectively make up the measurement instrument included in your research design. The forms that measurement items can take include • questions (the complete set of questions, i.e., the measurement instrument, is then called a questionnaire); • statements for which the respondents are asked to provide their response, usually by selecting at least one option from a set of predetermined response options; • observation items (the complete set of observations is then a structured observation schedule). Other types of measuring instruments include web scrapers2 and technical instruments normally used in science labs (such as movement sensors). Are You Measuring What You Think You Are? One common method used to collect data from a group of respondents is a quantitative survey. A survey is a research method used to collect data from a group of respondents by way of asking people a set of predefined questions or asking them to respond to a set of predefined statements. 
If you are doing a survey, one of the things you will need to consider is how the measurement items making up your measurement instrument are phrased and presented to the respondent, so that you can be sure that the items measure what they are intended to. If an item is not clear, then neither are any measurements gained from it: you will have a measurement, but maybe not of what you think. We illustrate this point by looking at an example inspired by Blair et al. (2014). Assume one of the items in a measurement instrument is the following:

Select the option that best represents your opinion about this statement: “The criminal justice system is performing well.”

1 Strongly disagree
2 Moderately disagree
3 Mildly disagree
4 Mildly agree
5 Moderately agree
6 Strongly agree

One of the problems here is that respondents may have different understandings of what is meant by “the criminal justice system.” For example, a respondent who
understands the criminal justice system to include the police, the courts, and the prisons will provide an answer reflecting their overall impression of the performance of that whole system. On the other hand, a respondent who understands the criminal justice system as being synonymous with the courts will provide an answer based on, and reflecting, their opinion of the performance of the courts only. Moreover, respondents may interpret “performing well” in very different ways. For example, for one respondent, a criminal justice system that performs well could mean one that can investigate and adjudicate criminal offenses in a timely and effective manner. For a different respondent, a criminal justice system that performs well could be one that is free of improper government influence. Therefore, it is impossible to know what exactly is being measured by this item if it is not clear what the criminal justice system is. You will have no way of knowing what understanding of this term participants of your study had when they responded to this item. Therefore, it will not be possible to draw valid conclusions from the data collected from this item.

What this example highlights is that the methods used to actually collect the numerical data for your study enable valid research findings only if the resulting measurements actually measure what they claim to. In the next section, we take a closer look at what this means in practice when we are designing our research. Or put another way, what will you need to think about to make sure that the measurements you make actually do measure what they claim to?

MAKING SURE THE MEASUREMENTS YOU MAKE MEASURE WHAT THEY CLAIM TO

You will recall from Chapter 8 that for each of the variables included in your study, you need to define all possible values that the variable might take.
For example, the variable “eye color” can be assigned the values of “brown,” “hazel,” “blue,” “green,” “gray,” and “amber.”3 Whenever a value of a variable is assigned to a participant of the study, the variable is being measured. For example, when a participant responding to the question “What is your eye color? Please select one of the options: 1. brown, 2. hazel, 3. blue, 4. green, 5. gray, 6. amber,” selects 1. brown, then 1 is the measure of the variable “eye color” for that respondent.

When designing your research, you will need to develop a way of measuring each of the variables of interest in your study. For some variables (such as “eye color” or “age”), you will only need one measurement item—what is your eye color or what is your age—to obtain that measure. For other variables (such as “motivation” and “achievement”), you will need a collection of measurement items to be able to measure those variables in a credible way. This is because the concepts of motivation and achievement are more complex and made up of a number of different aspects or factors related to them. This means that before you can decide what type of measurement instrument to use in your study, and exactly what the measurement items making up that instrument will look like, you will need to have asked yourself, and reached a decision about, the following questions:

• What is my understanding of the variable in question?

• How can I define my understanding of the variable in such a way that it can be measured?
• Given my understanding of the variable, which values can that variable take? Are these values the only possible set of values?

• How can I make sure that I measure what I want to measure?

We illustrate the thinking you will need to do related to the above questions by using two examples of putting this thinking into practice. Both examples illustrate why it is important to be clear (to yourself and to others) about how you understand and operationalize your variables of interest.

Example 1

Suppose your variable of interest is “time spent on social media platform X” and you choose to use a web scraper as the instrument for data collection. You will then need to decide which data the web scraper needs to “fetch” in order to measure your variable of interest. That will require you to be clear about what you mean by “time spent on social media platform X.” Do you understand “time spent on social media platform X” as equal to “time dedicated to being active (i.e., liking, sharing, posting) on social media platform X”? Or is simply being logged on to platform X enough to qualify as “spending time”? This is an important decision to make, because if your understanding of what “spending time” means is not clear, how can you go about measuring that variable? Or put another way, how can you design a web scraper that fetches data that are relevant for answering your research questions? If you are not clear about what your understanding of the variable “time spent on social media platform X” is, then you are not able to decide what data the web scraper needs to fetch. In other words, you are not able to design a web scraper that will produce relevant data for your study.
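The difference between these two understandings can be made concrete with a toy sketch (the log format, event names, and idle window are all our inventions, not the authors'): the same event log yields very different measures of “time spent” under the two operational definitions.

```python
# Hypothetical event log for one user: (minute_of_day, event).
log = [
    (0, "login"),
    (3, "like"),
    (5, "post"),
    (40, "share"),
    (60, "logout"),
]

def minutes_logged_on(log):
    """Definition A: 'time spent' = minutes between login and logout."""
    start = next(t for t, e in log if e == "login")
    end = next(t for t, e in log if e == "logout")
    return end - start

def minutes_active(log, idle_window=5):
    """Definition B: only minutes close to an engagement event (like,
    share, post) count as 'spending time'."""
    engagement_times = [t for t, e in log if e in {"like", "share", "post"}]
    covered = set()
    for t in engagement_times:
        covered.update(range(t, t + idle_window))
    return len(covered)

print(minutes_logged_on(log))  # 60 minutes "spent" under definition A
print(minutes_active(log))     # 12 minutes "spent" under definition B
```

Until you decide which definition operationalizes your variable, you cannot say which of these two numbers, if either, is the measurement your study needs.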
For example, a web scraper designed to fetch data about when and where people log on to social media platform X, and for how long they are logged on, does not provide relevant data about the variable “time spent on social media platform X” if your understanding of that variable is that spending time means engaging with the content on that social media platform (by liking, sharing, posting, reading others’ posts and so on). Consequently, if you go ahead and put a web scraper into practice without being clear about what data that scraper is supposed to fetch, then you cannot really know what the data produced by that scraper represents measures of. In other words, you are not able to assess whether the data produced by the scraper is relevant in terms of enabling credible answers to your research question(s). Example 2 Suppose your variable of interest is “learning-friendly classroom environments.” You are especially interested in the aspect of that variable that is about peer abuse at schools. Even more specifically, you are interested in how teachers follow up on peer abuse in classrooms. Now suppose you decide to use standardized observations to collect your data. If you are not clear about what your understandings of the variables “peer abuse” and “following up on peer abuse” are, then you will not be able to decide what to look for while observing. In other words, you are not able to decide what to include in your observation schedule and what to leave out.
For example, assume your understanding of the variable “peer abuse” is clear, and you therefore are able to decide what type of physical actions, spoken comments, or behaviors toward peers to look for and register in your observation log while making your observations. If, when making your observations, there is an episode of peer abuse in the classroom, and that episode is overlooked by the teacher at the specific time it occurs, does that mean the episode is not followed up on by the teacher? To answer this question, you need to make clear your understanding of “followed up on by the teacher.” If “follow up on” means taking action related to the peer abuse when it takes place, then you will register overlooking an episode of peer abuse at the time it actually occurred as not following up on it. However, it may be the case that the teacher’s experience is that acting upon peer abuse in the heat of the moment just makes the situation worse in the long run. Therefore, the teacher takes note of the episode when it takes place but does not take action related to it until later in the day. Does the action taken by the teacher at this later point in time qualify as following up on the peer abuse that took place in the classroom earlier in the day? If you do not know exactly what qualifies as following up on, and what does not, then you will not be able to define in your observation schedule what it is that you are supposed to look for related to this variable of interest. If you go ahead and do your observations anyway, you will not be able to precisely define what it is that you have measured related to following up, because you were not clear what following up meant when making those observations.

What Can We Conclude From These Examples?

What both of these examples demonstrate is the importance of being clear about how you understand the variables of interest in the study.
They highlight that if you are not clear about what your variable of interest actually is, then it is impossible for you to develop a way to measure your variables of interest in a way that will make sure that what is being measured is what you actually need to measure about them in order to answer your research question(s). If you cannot be sure of what you actually have measured, then the data produced by those measurements may not be relevant to your research questions at all. Therefore, the data will not enable credible answers to those questions. On the other hand, when you have made clear how you understand each of the variables included in your study, you will know exactly what it is about those variables that is of interest to you. Therefore, you will be able to develop a way to measure those variables. This is because when you know what it is about each variable that you are interested in, you are able to decide what to measure about the variable in order for the measurements to produce relevant pieces of numerical information. In the next section, we take a closer look at the thinking you will need to do about your variables of interest in order to make them measurable. TAKING A CLOSER LOOK AT WHAT ENABLES VARIABLES TO BE MEASURED How do you turn a variable such as “motivation” or “achievement” into something that you can measure that is related to your research question(s)? The first thing you will need to think about, and then do, is identify what you see as relevant about that variable in terms of what you will need to know to be able to answer your research questions. When you decide upon a specific way of understanding a variable of interest, you are developing what is called the construct for that variable that you will use in your study.
Chapter 9 • Collecting Data Using Quantitative Methods

By a construct, we mean a specific way of elaborating on, or understanding, an abstract concept (Black, 1999). For example, we all have an intuitive feel for what friendship is, but to investigate the concept friendship, and treat it as a variable in a specific study, we need to state explicitly what we mean by friendship. This involves us identifying the aspects, factors, or indicators we understand to be key elements of friendship. What you identify variables to be about, and what it is about them that is relevant in your specific research study, will be informed by the disciplinary and theoretical traditions in which the research you are designing is embedded. Consequently, deciding on the constructs for the variables of your study anchors your study to the total body of knowledge related to the area of research to which the study you are about to design will add even more knowledge. Therefore, assessing the credibility of how a variable is understood includes asking yourself questions such as these:

• Will experts in my area of interest agree with my understanding of the variable?
• Are there other possible ways of understanding this variable?
• Is my understanding of the variable congruent with the dominant theoretical and research traditions within my discipline?

When you identify quantifiable factors, or indicators, of a construct and specify which quantifiable factors or indicators to include in the measurement of the variable, you are developing what is called an operational definition of the variable (see Figure 9.1).

[Figure 9.1 ■ The Process of Developing Operational Definitions From Abstract Concepts. Variable: often abstract, and specific meaning is not defined. Construct: the variable is refined, and you develop a specific understanding of it. Operational definition: observable and quantifiable factors used to measure, describe, or identify the construct. Source: This illustration is developed by us, but is inspired by Black (1999).]

It is the operational definition of a variable, informed by the construct of that variable, that enables collecting pieces of information considered relevant to learning something about that variable.

TIP: NOT ALL VARIABLES LEND THEMSELVES TO MEASUREMENT

For some variables, identifying quantifiable factors that would make up an acceptable and appropriate way of measuring that variable would be a huge, difficult, and possibly endless endeavor. This is because "not all concepts lend themselves to measurement
and consequently will not be good variables to try to quantify—for example effective teaching, quality of life, and good research" (Black, 1999, p. 38). Therefore, if you do find yourself identifying concepts such as "good research" as the variables of interest for your study, you will need to take a step back to think through your research questions and consider (1) whether your research questions are well-enough developed, that is, what exactly is it you want to know about related to "good research"; or (2) whether the variables you have identified are the ones most relevant to answer those questions.

HOW TO MAKE ABSTRACT VARIABLES MEASURABLE

Understanding the variable → construct → operational definition process outlined in Figure 9.1 at a conceptual level is one thing; however, it is quite another to put this process into practice. To help you think about what this process entails in practice, in this section we provide an example of developing an operational definition of a specific variable. To do so we will use this exemplar research question: How is friendship related to well-being among children with attention-deficit hyperactivity disorder (ADHD)? In this research question, the variables of interest are "friendship" and "well-being." To make these variables measurable, you will need to follow these steps:

Step 1: Clarify Your Understandings (i.e., Develop the Constructs) of the Variables

To measure the variable "friendship," you must start by clarifying your understanding of friendship in the context of your research. For example, one understanding of friendship draws on Aristotle's idea of three kinds of friendship, namely utility (established because the one you befriend is useful to you), pleasure (established because of the pleasure you gain from the one you befriend), and the "perfect" friendship (established because of the other person being who they are) (Pangle, 2003).
However, this is not the only possible understanding of the variable "friendship." There are many other ways that friendship has been conceptualized and therefore understood. For example, friendship can be understood as "a close relationship between two children that is mutual and reciprocal" (Mikami, 2010, p. 181). Within this understanding of what friendship between children is, the variable "friendship" is not a uniform entity. Rather, "friendships vary in quality, in stability, and in the adjustment of the friend" (Mikami, 2010, p. 182; see also Hartup, 1995, 1996). Therefore, this understanding of the variable "friendship" identifies quality, stability, and adjustment as key factors, aspects, or indicators of friendship. Consequently, what you will need to think about, and decide upon, is which understanding of friendship you will adopt in your study and why. When refining and developing a specific understanding of the abstract variable "friendship" (or an aspect of that variable, such as "adjustment"), you are developing a construct of that variable (or of that aspect). You will undertake the same process with the variable "well-being." For example, your interest in the area of the well-being of children with ADHD, and your reading of the literature related to that area, may suggest that
within the population of children with ADHD, peer rejection negatively impacts their well-being. For example, you find a study that reported this finding: "Peer rejection, if left unchecked, leads to a variety of problems . . . that ultimately place the child on a perhaps irreversible negative trajectory" (Hoza, 2007, p. 660). Moreover, other studies suggest that "stable, high-quality friendships [have the potential] to buffer" (Mikami, 2010, p. 181) such negative outcomes. Given this, your understanding of "well-being" might focus on effects of peer rejection. Consequently, the constructs related to the variables "friendship" and "well-being" for your study could be (1) that quality, stability, and adjustment are understood as the key elements of "friendship"; and (2) that "well-being" is understood in the context of peer rejection.

Step 2: Identify Quantifiable Factors That Will Enable Measuring the Constructs

To enable measuring the concepts or variables of interest (in this case "friendship" and "well-being"), you will need to develop operational definitions from the constructs of those variables. In other words, you will need to identify quantifiable factors, or indicators, of the variables "friendship" and "well-being" and specify which quantifiable factors or indicators to include in your study. The construct of the variable "friendship" discussed previously in Step 1 above identifies quality, stability, and adjustment as the key elements of that variable. Therefore, to develop an operational definition of this variable, you will need to specify the quantifiable factors or indicators for each of these three key elements. Once again, you will need to draw on the work of others when doing so.
For example, from the studies by Mikami (2010) and Gülay and Önder (2013), you will be able to identify the following factors or indicators of the element quality of friendship:

• validation (acknowledging another person's emotions, values, thoughts, and so on),
• care (showing kindness and concern for others),
• trust (believing that another person is honest and will not do anything deliberately to harm you or put you at risk),
• conflict (strongly disagreeing, or arguing, with someone about something important),
• antagonism (hostility or aggression between two people), and
• competition (when two or more people are trying to get something that not everyone can get).

Therefore, the six quantifiable factors above will make up the part of the operational definition of friendship covering the key element quality of friendship. Whether the above list of six quantifiable factors of the key element quality of the variable "friendship" is suitable in terms of enabling valid research findings to emerge from the data produced by including these factors in the measurement of the variable "friendship" depends. What it depends on is how well the list covers all relevant factors of the element quality.
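To make the variable → construct → operational definition chain concrete, the construct and its quantifiable factors can be written down as a simple data structure. The sketch below is our own illustration, not part of any published instrument; the names (FRIENDSHIP_CONSTRUCT, undefined_elements) are hypothetical, and the empty lists simply mark the elements whose factors still have to be specified in Step 2.

```python
# A minimal sketch (our own illustration): the construct "friendship"
# broken down into its key elements, with the element "quality" further
# broken down into the six quantifiable factors listed above.
FRIENDSHIP_CONSTRUCT = {
    "quality": [
        "validation",   # acknowledging another person's emotions, values, thoughts
        "care",         # showing kindness and concern for others
        "trust",        # believing the other is honest and will not harm you
        "conflict",     # strong disagreement about something important
        "antagonism",   # hostility or aggression between two people
        "competition",  # trying to get something that not everyone can get
    ],
    # These elements still need their own quantifiable factors:
    "stability": [],
    "adjustment": [],
}

def undefined_elements(construct):
    """Return the key elements that do not yet have quantifiable factors."""
    return [element for element, factors in construct.items() if not factors]

print(undefined_elements(FRIENDSHIP_CONSTRUCT))  # ['stability', 'adjustment']
```

Writing the operational definition down this explicitly makes it easy to see, at a glance, which parts of the construct are not yet measurable.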
Assessing whether the factors on the list cover the content of the key element quality is done by comparing the list to the disciplinary and theoretical traditions in which your study is based. If peers and experts within the disciplinary and theoretical tradition in which your study is embedded accept these quantifiable factors as necessary and sufficient for measuring the key element quality of the variable "friendship," then this part of the operational definition of the variable "friendship" is assessed as contributing to the validity of the research. You will see this understanding of validity being referred to as content validity. Making sure that the quantifiable factors included in an operational definition of a variable cover all relevant elements of that variable is important because, if a factor is missing in an operational definition, something will be missing in the data produced by the measurement items developed from that incomplete operational definition. In turn, this will lead to outcomes from statistical analyses performed on those data that do not tell the whole story about the variable in question. Therefore, any claims about the variable in question based on such outcomes are not well-founded (i.e., the validity is threatened). For example, suppose you only include validation, care, and trust as the quantifiable factors used to measure quality of friendship among children with ADHD. Then if you conclude that the quality of friendship has the potential to increase well-being among those children, you implicitly ignore any potential negative effect on well-being from conflict, antagonism, and competition (the other factors or indicators of the element quality of friendship) among children with ADHD. In other words, the claim you make about how quality of friendship relates to well-being among children with ADHD is based on data covering only half of the indicators of quality of a friendship that may affect well-being.
Once you have identified the quantifiable factors related to quality, you will then follow the same process to identify quantifiable factors related to the other key elements, namely stability and adjustment, that you have identified as being part of the construct of the variable "friendship." In Figure 9.2 below, we provide a summary of quantifiable factors of all three key elements of the variable "friendship" according to the construct of that variable discussed previously in Step 1 above. However, even when you have done all this, you still have more thinking to do. This is because to be able to answer the research question How is friendship related to well-being among children with ADHD? you will need to develop an operational definition for the variable "well-being" as well. You will recall that you have already decided on a construct for the variable "well-being"—namely that the variable "well-being" is understood in the context of peer rejection. Therefore, the operational definition of well-being would be developed by identifying quantifiable factors enabling peer rejection to be measured.

Developing Measurement Items to Actually Measure the Variables

When you have developed operational definitions for each of the variables included in your study, you are ready to move on to a more operational level. This is because simply developing operational definitions enabling the variables of your study to be measured is not enough. You still need to actually develop the items that will, when combined, make up the measurement instrument of your study. Or you will need to find an existing measurement instrument that is in line with the operational definitions you have developed
for the constructs of your variables of interest. It is that measurement instrument you will put into practice when actually collecting the data. In the next part of the chapter, we take a closer look at what you need to think about when developing, and putting into practice, a measurement instrument that will enable relevant data to be collected.

[Figure 9.2 ■ The Process of Developing Operational Definitions From the Abstract Variable "Friendship". Note: The example is developed from the ideas of Mikami (2010) and Gülay and Önder (2013).]

Activity: Deciding on Whether to Design Your Measurement Instrument Yourself or Adopt One Developed by Others

In many cases, there are existing measurement instruments that can be used to measure specific concepts relevant for your research questions. For example, if your research question involves the variable "depression," you will find many existing instruments for measuring aspects of depression.4 Find a measurement instrument that has been developed by others and that includes items designed to measure at least one of the variables relevant to the research you are about to design. Answer the following questions about that instrument:

1. Do the understandings and assumptions about the concepts in question match the thinking underpinning my research project?
2. Does the instrument focus on the same aspects of the concept in question as do I?
3. Is it clear how the concept being measured by the instrument is operationally defined?
4. Does the instrument measure all quantifiable factors of the concept, as those factors are defined by the operational definition underpinning the instrument?

If, based on the answers you gave to the above questions, you now consider adopting that instrument, or specific items from it, you will need to be able to justify that such an adoption is congruent with the theoretical and disciplinary traditions in which the research you are about to design is embedded. If you are unable to find an instrument developed to measure at least one of the variables relevant to the research you are about to design, or you are unable to justify adopting the instrument you have found, you know you will have to develop a measurement instrument yourself.

DEVELOPING A MEASUREMENT INSTRUMENT

The aim of any measurement instrument is to enable relevant and credible data to be collected that can be analyzed in a statistically reasonable way and provide you with the information you need to answer your research questions. This means that when developing the measurement instrument for your study, you will need to think about

• how to develop items in that measurement instrument that measure what you intend them to measure,
• whether or not responding to the measurement item(s) demands too much from the respondents and therefore may not provide accurate data,
• whether the items in your measurement instrument enable consistent measurements of the quantifiable aspects they are intended to measure, and
• whether the form that a measurement item takes is in keeping with what that item is supposed to measure.

In the discussion to follow, we take a closer look at what you will need to think about related to each of these points.
Developing Measurement Items That Measure What You Intend Them to Measure

As we have seen, when developing a measurement instrument to use in a specific research study, you will need to be clear about which quantifiable factors you need to include measurements of in that measurement instrument. However, even when you have done this, it is not always clear how to phrase an item that actually measures those specific quantifiable factors. You will need to word each item in your measurement instrument carefully so that you can be sure that each item measures what it is supposed to measure.
How do you do this? To answer this, we will return to the example we discussed earlier5—about making the abstract variable "friendship" measurable. In this example, quality, stability, and adjustment were understood as the key elements of the variable "friendship," and one of the quantifiable factors making up the key element quality was validation (acknowledging another person's emotions, values, thoughts, and so on). So how do you phrase an item to measure validation? One way that you might do this is to design the following item:

Please rate the degree to which you agree with the following statement: "My friend does not mind that I sometimes get angry"
1 = I do not agree at all, 4 = I moderately agree, 7 = I fully agree

However, what you will need to think about is if someone responds that they fully agree that their friend does not mind that they sometimes get angry (i.e., 7 on the scale), does that actually indicate that when giving this rating the person was thinking about validation, that is, acknowledging their friend's emotions, values, and thoughts? Could it be that what this item measures is not about the factor validation at all, but something else? For example, the factor care (showing kindness and concern for others)? You will have no way of knowing this. Or put another way, you will have no way of knowing if you have measured what you set out to—the factor validation. The result of not knowing if you have measured what you set out to measure is that you have no way of assessing the construct validity of the measurement instrument, that is, the degree to which a measurement instrument measures what it is supposed or expected to measure. Construct validity is central to establishing the overall validity of the research you are about to design.
If items in your measurement instrument do not measure what they are supposed to measure about the variables of the study, no statistical analysis procedure using data obtained from those flawed measures will enable credible research findings about the variables of interest to be achieved. This applies even when the data produced by putting the measurement instrument into practice is of the type that allows those statistical analysis procedures to be performed. To make sure that the items intended to measure a specific variable (or aspect of that variable) actually measure that (aspect of the) variable, you will also have to think carefully about the response options you provide for answering those items. These response options need to be in keeping with how you have operationally defined that variable. For example, to be able to measure the factor validation, it must be possible to choose response options that are commonly agreed upon by fellow experts as being relevant to the factor validation in relation to quality, where quality is understood as one of three key elements of the variable "friendship." The response options you offer for a specific item affect whether peers and experts within the disciplinary and theoretical tradition in which your study is embedded see the
item as a good (or bad) measurement item for measuring the variable (or aspect of a variable) that the item is supposed to measure.

Do Your Measurement Items Demand Too Much From the Respondents?

Tourangeau and Bradburn (2010) suggest that several cognitive processes are initiated when a person fully engages in responding to an item in a measurement instrument such as a questionnaire. First, the item is interpreted, and then the respondent deduces its intent. Once respondents have made up their minds about what the item is about, they search their memory for relevant information and then merge that information into a single response. These cognitive processes occur in loops and evolve as each respondent engages in a type of mental conversation with the researcher, via the measurement items. At the opposite end from full cognitive engagement, you find respondents just ticking a box, or scribbling down an answer, with little or no motivation other than just finishing the measurement instrument (usually as quickly as possible). What the items are about, as well as how they are phrased, may affect whether a participant is willing, or even able, to undertake the cognitive burden of providing an accurate, honest, or truthful response to a question or item in a measurement instrument. Consequently, what a question is about, and how it is phrased, may affect the credibility of the research findings based on the data produced from people responding to that question. For example, when faced with a question that is hard to interpret, such as the question How many university-level statistics courses have you taken? the respondent is likely to be more inclined to choose the tick-a-box option and move on. This is because the respondent might not want to engage in interpreting what "taken a course" means. Does it mean courses passed? Or does it include courses taken, but failed? Do courses with no credit (such as prelecture summer courses) count?
Or do they have time to actually work out how many courses they have taken if, for example, courses were taken a long time ago or they have taken many of them? Moreover, a respondent may just tick boxes to finish the measurement instrument in the shortest possible time because they are frustrated by not understanding what is being asked, or simply frustrated by the number of items they have to respond to. In such cases, you really cannot know whether those responses reflect the thoughts or actions or knowledge or attitudes of that respondent. Such non-correspondence between the information you get in the form of data and what you will conclude about your variables based on that information will threaten the validity of those conclusions.

TIP: PRETESTING THE MEASUREMENT INSTRUMENT TO ENABLE COLLECTING (MORE) CREDIBLE DATA

To avoid the problematic situation of starting the main data collection, and then realizing that the measurement items in themselves or the measurement instrument as a whole represent measurement problems, we strongly suggest pretesting. If you have developed (parts of) your measurement instrument yourself, and that instrument has never been used to collect data before, the individual items need to be pretested for "clarity, ambiguity, and difficulty in responding to." Moreover, the
complete set of measurement items (i.e., the measurement instrument) needs to be tested for "length, and for time and difficulty to complete" (both quotes from Punch, 2003, p. 34). Even smaller scale pretesting on family and friends can provide valuable feedback to make sure your measurement items work as intended. For example, pretesting the item

Select the option that best represents your opinion about the statement "The criminal justice system is performing well"
1 = Strongly disagree, 2 = Moderately disagree, 3 = Mildly disagree, 4 = Mildly agree, 5 = Moderately agree, 6 = Strongly agree

may include asking a group of people (1) what "the criminal justice system" means to them, (2) what they see as characterizing a well-performing criminal justice system, as well as (3) how they arrive at a response to the item. This will help you understand the cognitive processes that the respondents undertake to arrive at a response to a specific measurement item—and such an understanding will help you write items enabling more credible data to be collected.
If you are now wondering whether pretesting is necessary, consider the following hypothetical situation: Assume that you, after having received responses from 50 of the 500 people you have asked to participate in your research, realize that the item about the criminal justice system above is not clear and therefore needs to be changed to

Select the option that best represents your opinion about the statement "The courts are free of improper government influence"6
1 = Strongly disagree, 2 = Moderately disagree, 3 = Mildly disagree, 4 = Mildly agree, 5 = Moderately agree, 6 = Strongly agree

The problem with changing an item when data collection has already started is that you will have no way of knowing whether the data from the first 50 respondents related to this item represents the same understanding of what the item is meant to measure compared to data from respondents completing this item after you have changed it. Then you must either

• discard the data from the first 50 respondents (which is potentially a serious threat to the credibility of your research, especially if you have limited resources and therefore do not have the opportunity to plan for a sample so big that +/- 50 respondents does not matter much) or
• keep the data from the first 50 respondents, and risk muddling up data that represent one measure of the variable in question with data representing a different measure of that variable. This is potentially a serious threat to the credibility of your research, for reasons we have emphasized throughout this chapter so far.

Consequently, while pretesting the measurement instrument will take some time, in the end it will be time well spent. Pretesting is one way to avoid, or at the very least minimize, bumps along the way when putting your study design into practice. Other things can go wrong, too.
The key point is to seek help as soon as you become aware that not everything is going as planned—for example, you are not getting the data you need (type or amount) to answer your research question(s).
Will the Measurement Items Enable Consistent Measurements of the Quantifiable Aspects They Are Intended to Measure?

If an item is phrased in such a way that the interpretation of that item varies across the group of respondents, you cannot really know whether you have measured what you intended to by including that specific item in your measurement instrument. In other words, that item does not serve as a consistent measure of the variable in question (or a quantifiable factor of that variable). Consistency of measures allows comparing like with like when statistically analyzing your data. The consistency of measures is related to the validity of the measures and therefore to the validity of the research. In textbooks about quantitative research or quantitative methods, you will see the degree of consistency across multiple measures of the same construct referred to as the reliability of the measurement. For example, you may expect that all students enrolled in a specific data science study program at a specific university will respond consistently to the question How many university-level statistics courses have you taken? This is because being enrolled in the same study program means they all have the same course plan, and therefore they all have taken the same statistics courses. However, a student who has previously taken statistics courses elsewhere will probably respond with the total number of statistics courses taken—not just the ones included in the course plan for the specific data science study program that the student is presently enrolled in. Moreover, some students may include courses taken, but failed, in their responses, while others include passed courses only. Therefore, the question does not serve as a consistent measure. To turn the question into a more consistent measure, you should consider rephrasing it.
For example, the question How many statistics courses have you taken and passed as part of the study program you are currently enrolled in? could prove to be a more consistent measure of what you actually want to measure compared to the original version of the question (How many university-level statistics courses have you taken?).

PUTTING IT INTO PRACTICE: DISTINGUISHING BETWEEN RELIABILITY OF MEASURES AND VALIDITY OF MEASURES

Suppose Question X and Question Y are meant to measure the same variable (or aspect of that variable). If Question X or Y or both do not serve as consistent measures of what those questions intend to measure, a respondent may produce an answer to Question X that is inconsistent with the answer given to Question Y. For example, you may expect that students enrolled at a specific university, in a specific data science study program that has a set number of statistics courses, will respond consistently to these questions:

1. How many statistics courses have you taken as part of the study program you are currently enrolled in?
2. What is your total course credit from statistics courses taken as part of the study program you are currently enrolled in?

However, as pointed out earlier in this section, some students may include courses taken, but failed, in their responses to Question 1, while others may include passed courses only. Moreover, in Question 2, only courses passed will be included, as courses
not passed do not give any course credit. Therefore, their answer to Question 1 will not necessarily be in line with their answer to Question 2. Consequently, the questions do not serve as consistent measures of how many semester hours a student has spent (or is supposed to have spent) on statistics. However, even if a question is phrased in such a way that all respondents do interpret it in the same way (i.e., the measure is reliable), but for some reason they do not interpret it in the way you intended, then the measurement still will not measure what it is supposed to measure—what you wanted to know about. Rather, it will measure what the respondents all interpreted the question to be about. Therefore, any measures obtained related to this reliable measurement item are not valid in relation to you being able to make claims related to your research questions based on them. Hence a measurement can be reliable but not necessarily valid. On the other hand, if the measurement is not reliable, it cannot be valid. For example, if a question is phrased in such a way that different people interpret it in different ways, you do not really know what that question measures, and therefore the measurement is not valid. From all this we can conclude the following:

• A measurement must be reliable to be valid.
• But just because it is reliable doesn't necessarily make it valid.

Is the Form a Measurement Item Takes in Keeping With What That Item Is Supposed to Measure?

For each item comprising the measurement instrument you are about to develop, you must decide what form that item will take. An item may include a set of predefined response options. These are known as closed-response items (or closed-ended questions in cases when the item is a question) because they come with a set of predefined response options that close off the range of responses that can be given to an item.
Examples of such closed-response items are the following:

1. Select the option that best represents your opinion about the statement "The criminal justice system is performing well."
1 = Strongly disagree, 2 = Moderately disagree, 3 = Mildly disagree, 4 = Mildly agree, 5 = Moderately agree, 6 = Strongly agree

2. What is your eye color? Select one of the options: 1. brown, 2. hazel, 3. blue, 4. green, 5. gray, 6. amber.

Other items may take the form of an open-response item (or open-ended question in cases when the item is a question). Open-response items are phrased in such a way that the respondent is free to provide whichever response that respondent sees as appropriate. An example of an open-response item is, What is the most important issue for you that the COVID-19 pandemic has raised? Answers to this open-ended question will be in the form of text. This text will need to be analyzed, and the outcomes emerging from that analysis must be converted to and recorded as numerical data to enable statistical analysis of them.
228  Research Design

To do this will usually involve some form of deductive coding, where the researcher looks specifically for examples of predefined codes (typically listed in some form of codebook) in the textual answer provided by the respondent. The thinking behind this way of coding is quite different from the inductive coding strategies that are most often associated with qualitative research.7 The open-ended question What is the most important issue for you that the COVID-19 pandemic has raised? could have been posed as a closed-ended question by providing a set of response options based on issues related to the COVID-19 pandemic you have identified from your reading of literature related to this question. For example, What is the most important issue for you that the COVID-19 pandemic has raised? Select one of the following options:

1. Ensuring vaccine supply
2. Unemployment
3. Loss of freedom
4. The need for strong leadership by governments
5. I don’t know

In that case, you can only get information about those five possible responses. Therefore, what you will need to think about when deciding whether to use forms of open or closed items in your measurement instrument—or a combination of both—is that the choice you make about the form that each item takes will affect what type of information you are able to get by asking that question in that way. This means that thinking about the choice between open-response and closed-response items is not just a matter of preference. Rather, the thinking concerns whether the measurement instrument is “any good as a measure of the characteristic it is interpreted to assess” (Messick, 1980, p. 1012). In other words, choosing which form an item will take affects the degree to which that item measures what it is supposed to measure. Consequently, the form that an item takes affects the validity of the measure (and therefore the validity of the research and the research findings).
TIP

A REMINDER: HOW THE RESPONSE OPTIONS YOU OFFER RELATE TO YOUR RESEARCH BEING STATISTICALLY REASONABLE

What form the response options to a question take affects whether the research is statistically reasonable (see Chapter 8). This is because the responses need to be able to produce data of a type that enables performing statistical analyses that will produce the kind of knowledge needed to answer the research question. For example, if you need to know something about the respondents’ age, you could choose to offer the options (youth, adult, senior) for an item meant to measure age. Then that item will produce ordinal data (because youths are younger than adults, and adults are younger than seniors). This data will not enable you to calculate average ages of subgroups of people, for example, subgroups defined by the geographical area
in which people live. Therefore, if you need to know the average ages across subgroups to be able to answer your research question(s), you will not have the type of data that you need using these response options.

Takeaways From This Section

After reading this section, you should understand how and why the validity of your research is affected by how your measurement instrument is designed. This includes how each item making up that measurement instrument itself is designed. How each measurement item is designed includes the wording, content, and form of each item. To ensure the validity of your research when designing the measurement instrument that will be used in that research, and the items that comprise it, you will need to ensure that

• how you develop items in your measurement instrument results in them being able to measure what you intend them to measure,
• responding to an item does not demand too much from the respondent (if it does, that item may not provide accurate data),
• the items in your measurement instrument enable consistent measurements of the quantifiable aspects they are intended to measure,
• the form that an item takes is in keeping with what that item is supposed to measure.

However, even if you have addressed all of these issues, you are only halfway there in terms of the thinking that you will need to do related to putting your measurement instrument into practice. This is because it is one thing to have developed a credible and valid measurement instrument such as a questionnaire, but it is quite another to actually put it into practice. In the next section, we take a look at what you will need to think about when deciding how you put that measurement instrument into practice—that is, actually use it to collect your data.

PUTTING A MEASUREMENT INSTRUMENT INTO PRACTICE

On its own, a measurement instrument will not produce any data.
It is only when that instrument is put into practice as part of a data collection procedure that the actual data are produced. To explore what you will need to think about when putting your measurement instrument into practice, we use the example of developing and implementing a measurement instrument in some form of survey research. We chose this example because surveying is a frequently used method in quantitative research. In fact, survey research “is one of the pillars in social science research in the twenty-first century” (Stockemer, 2019, p. 26). The measurement instrument in a quantitative survey is usually, but not always, some sort of questionnaire made up of individual measurement items. It is when that questionnaire is distributed to people requesting their responses, and when those people actually
provide their answers to the items comprising that questionnaire, that the numerical data of that research study are produced. Therefore, it is important to think about how to optimally put into practice the collection of that numerical data. For example, how will you approach people to ask them to participate in your research? If they agree, in what form will they receive that questionnaire: Will it be in paper form, electronic form, over the telephone, or face-to-face? When they are filling in the questionnaire, will your items be understood by your participants, or is the language you use in your items not clear enough? Have you considered how long it might take your participants to complete that questionnaire? And throughout all of this, have you thought about the ethical issues related to what is asked and how it is asked? Remember, no matter how well designed your questionnaire is, if you do not think through these questions, you are unlikely to gain the credible data you need. In the rest of this section, we take a closer look at what you will need to think about in order to address these sorts of questions. We begin by looking at what you need to think about to optimize the response rate of your survey.

Response Rates and How They Relate to Sampling and Validity

If enough people don’t respond to a request for them to answer a questionnaire, then that questionnaire will not give you the numerical data you need. The proportion of sampled individuals that are willing to provide information by responding to the measurement instrument that is part of a specific survey-based research study is referred to as the response rate of that survey.
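In its simplest form, a response rate is just a proportion, and it can be computed overall and per subgroup. The short sketch below illustrates this with invented numbers; the subgroup labels and counts are hypothetical, chosen only to show why subgroup rates can diverge from the overall rate.

```python
# Response rate = respondents / sampled individuals.
# All subgroup labels and counts below are invented for illustration.

def response_rate(respondents: int, sampled: int) -> float:
    """Proportion of sampled individuals who actually responded."""
    if sampled <= 0:
        raise ValueError("sample size must be positive")
    return respondents / sampled

sample = {"under_30": 200, "30_to_59": 250, "60_plus": 150}    # invited
responses = {"under_30": 150, "30_to_59": 100, "60_plus": 30}  # replied

overall = response_rate(sum(responses.values()), sum(sample.values()))
print(f"overall: {overall:.0%}")  # 280 / 600 ≈ 47%
for group in sample:
    rate = response_rate(responses[group], sample[group])
    print(f"{group}: {rate:.0%}")
```

Note how an overall rate of roughly 47% hides very uneven subgroup rates (75%, 40%, and 20% here), which is exactly the kind of pattern the discussion below warns can undermine the representativeness of the final study sample.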
Response rates matter because very low response rates are more likely to cause your final study sample, that is, the group of people who actually provide the data for the study (as opposed to the sample you originally planned for), to be incongruent with the sampling strategy you initially designed, and that was in line with your plan for arriving at statistically reasonable answers to your research question(s).8 For example, a probability sample will secure approximate representativeness of the study population, but only to a point. If the response rate is low, the final study sample may be too small to capture the variability in the variables you intend to measure. In such cases, even probability sampling will not enable research findings about the sample to underpin statistically reasonable research findings about the study population. In other words, if not enough people respond to ensure approximate representativeness of the study population, then no matter how good the design is, technically speaking, the research will not achieve the purpose it was designed for. Therefore, the easiest way to avoid problems related to nonresponse is to make sure that response rates are high across all subgroups of the study population relevant to the research in question. This means that you will need a very well-thought-through sampling strategy, able to optimize response rates, to form part of your research design. Ways of improving response rates include

• providing advance letters or some other form of information to the people in your sample announcing the survey and outlining what participating in it involves, or
• follow-up efforts such as reminders or multiple attempts to obtain responses to motivate a potential respondent to actually respond.
Improving response rates might also include using incentives for participation. However, it is important to recognize that there are ethical considerations you will need to think through related to using incentives to motivate people to respond to a measurement instrument. Such considerations are discussed in the box below.

PUTTING IT INTO PRACTICE

ETHICAL CONSIDERATIONS WHEN USING INCENTIVES FOR PARTICIPATION

Sometimes, a token of gratitude in the form of some sort of payment or reward is offered to potential respondents in return for participating in a study. However, this can raise ethical issues that will need to be thought through. Empirical evidence shows that prepaid monetary or nonmonetary incentives to participate, such as cash, pens, and lottery tickets, do increase response rates (Singer & Ye, 2013). However, while monetary and nonmonetary incentives can help with recruitment of respondents for a survey and also act to motivate some respondents to participate in the survey who otherwise might not (Singer & Bossarte, 2006), increasing recruitment and response rates does not outweigh the importance of thinking about the ethical issues involved in using incentives. For example, surveys on sensitive topics such as family or self-directed violence, sexual abuse, underage substance use, and weapon possession may cause psychological trauma or otherwise place respondents at risk or impose burdens on their lives (Schirmer, 2009; Singer & Bossarte, 2006). In such cases, “[i]ncentives are improper when they are used to induce participation in the presence of avoidable or unreasonable risks. What is unethical in such a situation is not the use of incentives, but the failure to protect against risk” (Singer & Bossarte, 2006, p. 416).
This is an example of why, in Chapter 2 of this book, we emphasized that thinking about ethics should continue throughout the entire research process as opposed to ethics being thought about as a static list of standardized steps to be followed that can be made into a checklist and then ticked off as complete one by one. To sum up, we have established that response rates matter in terms of you being able to make statistically reasonable claims about the study population based on statistically analyzing the data you collect. Therefore, when designing your research, you will need to think through matters such as the way you will approach potential respondents and what you will expect from them in terms of, for example, how much time it takes to complete the measurement instrument or the cognitive or emotional burden such responding puts on the respondent.

Ways of Collecting Data When Doing Survey Research, and Why They Matter

Another key consideration when putting the measurement instrument into practice is how you will actually collect the data. For example, data collection in a survey-based research design can take various forms. The measurement instrument could be presented to a participant verbally face-to-face, or by using the telephone. Or, a written measurement instrument could be distributed by mail or digitally by email. For example, participants of the study could receive an email containing a link to an online questionnaire.
Thinking about, and choosing, a specific form that the data collection will take involves assessing the feasibility of each option. For example, do you have the resources in terms of time and money to do face-to-face interviewing? What will it take to get hold of the necessary information, such as email or postal addresses, to contact the people in your sample? However, it is not just feasibility considerations that you will need to think about when deciding how to collect your survey data. You will also need to consider and think through what type of items you have included in your measurement instrument, whom they are presented to—when, where, and how—and whether the way you have decided to administer the measurement instrument is appropriate in terms of you obtaining credible and reliable data. We will illustrate why considering these issues is important in terms of you being able to arrive at credible answers to your research questions by exploring three scenarios.

Scenario 1

Assume your study is about students’ attitudes toward cash-in-hand payments (i.e., payments for which no tax is paid) in cases of casual employment. You have developed a measurement instrument comprising items measuring aspects of the variable “attitude toward cash-in-hand payment,” and the items are phrased using expressions such as “unofficial work,” “under-the-table payment,” “untaxed payment,” and so on. Moreover, assume your data collection procedure includes distributing paper copies of your measurement instrument to a group of financial law students during a lecture on tax revenue. You ask them to return the completed measurement instrument before leaving the auditorium. The problem here is that financial law students might be more prone to underreport positive attitudes toward cash-in-hand payment for work during a lecture on tax revenue than if they had responded to the measurement instrument in an off-campus context.
This is because even though the measurement instrument is self-administered, and anonymous, the students are likely to want to “look good against the background of social norms and common values” (Stockemer, 2019, p. 41)—in this case the norms and common values associated with being a financial law student. As such, the students might think it is socially unacceptable to report truthfully about having a positive attitude toward breaking laws related to work and paying taxes. You will see this phenomenon, the tendency to depict oneself as conforming to social norms, described as social desirability in many textbooks. Social desirability is one of the most common issues related to how the items comprising a measurement instrument can, when combined with a way of collecting data that does not take the nature of the item into account, affect the credibility of research (Nederhof, 1985; Stockemer, 2019). This is because, in this case, people do not answer the questions comprising your measurement instrument accurately and truthfully. Instead, they answer what they think they should answer, or what it is acceptable to answer in relation to that question. Consequently, whatever you claim to know about your study population is based on estimates of population characteristics computed from numerical data about what people thought they should answer in relation to the variables of the study. These answers may not, and probably will not, correspond well to what the actual situation is that you claim to have learned something about. For example, you may find that the financial law students in your sample have high levels of negative attitudes toward all factors associated with cash-in-hand payments in cases of casual employment, yet in reality a number of them are actually taking such payments to support their studies, and believe this is justified.
Scenario 2

Suppose you have decided that living in a residential aged care facility is an inclusion criterion for the study population of your study. You design a strategy for selecting who will be the participants of your study, and make sure that the sample represents the study population well.9 Moreover, because an online survey is both cost and time efficient, you choose to distribute an online self-administered questionnaire by way of sending a link by email. The older people receiving the link will then access the questionnaire by clicking that link. When a respondent has completed the questionnaire, their response will then be sent back to you in digital form. In this way, the data is automatically ready to be fed into software designed to perform statistical analyses. All good so far. However, many elderly people living in a residential aged care facility may not have an email address. And even if they do have an email address to which you can send the link, to access the questionnaire they would need access to a computer, as well as the knowledge and health required to operate that computer. While some of the elderly people in the residential aged care facility may be able to do this, many may not. In effect, those who cannot do this will not be able to participate in the research even if they wanted to. Consequently, only a fraction of the people in your study sample will have the opportunity to reply to the questions comprising the questionnaire. In other words, the response rate (i.e., the proportion of sampled individuals that actually provide information by responding to the questionnaire) is low. As we pointed out at the outset of this section about putting your measurement instrument into practice, low response rates may pose a serious threat to the validity of your research.
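The bias at work in Scenario 2 can be made concrete with a small simulation. Everything in the sketch below is invented: the population size, the age range, and the assumption that younger residents are more likely to manage an online form. The point is only to show that when who responds depends on the characteristic being studied, the estimate computed from the respondents drifts away from the true population value.

```python
# A hypothetical simulation of Scenario 2: older residents are less able to
# complete an online questionnaire, so the respondents over-represent the
# young. All numbers and probabilities are invented for illustration.
import random

random.seed(42)

# Each resident: (age, can_complete_online). For illustration, residents
# under 80 are assumed far more likely to manage the online form.
population = [
    (age, random.random() < (0.9 if age < 80 else 0.2))
    for age in (random.randint(65, 100) for _ in range(1000))
]

true_mean_age = sum(age for age, _ in population) / len(population)

# Only residents who can complete the online form end up responding.
respondents = [age for age, can_respond in population if can_respond]
observed_mean_age = sum(respondents) / len(respondents)

print(f"response rate:     {len(respondents) / len(population):.0%}")
print(f"true mean age:     {true_mean_age:.1f}")
print(f"observed mean age: {observed_mean_age:.1f}")
# The observed mean is biased downward, because the oldest residents
# are largely missing from the final study sample.
```

Notice that collecting more responses from the same computer-literate group would not remove this bias; it stems from how the data were collected, not from the raw number of respondents.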
Scenario 3

Suppose you decide to collect data by way of asking people questions face-to-face, or by telephone. In each case, there will be some form of interaction between you and the respondent. Consequently, the respondent has the opportunity, for example, to ask you to clarify aspects of the questions being asked if needed. Such interaction will help avoid questions being interpreted differently across the group of respondents. This will strengthen the validity of your research. On the other hand, the personal interaction between you and the respondent may make the respondent feel uneasy about providing answers to certain types of questions, for example, if your questionnaire includes questions that are perceived as judgmental or questioning the respondent’s competence, questions about sexuality, questions about abuse, or questions about political preferences. These are typical examples of questions that may make participants prone to opt out of providing an answer when face-to-face or telephone interviews are used, simply because the respondent cannot remain completely anonymous. The more respondents who choose to opt out of providing answers to your questions, the lower the response rate. As we have already discussed, low response rates represent a possible weakness in the validity of your research. Therefore, when planning to ask people questions that may make people feel uneasy in some way, distributing the questionnaire by mail or email may be a better option than choosing the face-to-face (or telephone) option. This is because you will then be absent at the time the respondents provide their responses to the questions making up that questionnaire, thereby maintaining their anonymity when responding. However, the trade-off will be that when you are absent, “assistance cannot be provided, and there is nobody who can clarify unclear questions or words” (Stockemer, 2019,
p. 65) should the respondent need it. You will need to make choices about this trade-off in terms of what will matter more in terms of the credibility of the data that you collect.

TIP

PRETESTING THE ENTIRE DATA COLLECTION PROCEDURE

As highlighted previously in this chapter, pretesting the individual items that make up the measurement instrument is part of ensuring the credibility of the research. Pretesting also provides an opportunity to test the way you plan to put that measurement instrument into practice (i.e., test your plan for approaching people, how you plan to actually collect the data, test any effect the use of incentives has on who decides to participate in your study, and so on). Of course, such pretesting requires that you assemble a test sample comprising people who could just as well have been included in the “real” sample of your study, approach them, and collect data from them, in the same way that you are planning to approach and collect data from the people in the “real” sample. You do this to test whether your plan for collecting data works as intended in your research design. If you are a student designing a smaller scale study with a very limited time frame, the extent of pretesting that you are able to achieve may be limited. This is a feasibility consideration. However, any pretesting of the data collection process that you are able to manage “is likely to help increase response rates” (Punch, 2003, p. 34), and therefore may potentially strengthen the credibility of your research.

Take-Home Messages From These Scenarios

The above scenarios illustrate that choosing how to go about putting some sort of measurement instrument into practice, and actually collecting the data for your study, is not as straightforward as it might seem. You will need to think through the effects of each decision you make related to how you will put that instrument into practice and justify them when you are designing your research.
This is important, as decisions about how you will go about collecting the data of your study may affect the credibility or validity of the answers to your research questions.

TIP

TIPS FOR WHAT YOU NEED TO THINK ABOUT IF YOU ARE CONSIDERING USING DATA COLLECTED BY OTHERS

Sometimes research is designed involving the reuse of existing data already collected by others. While reusing data collected by others might be a tempting option in that the data is already there waiting for you, reusing data comes with its own challenges. Therefore, there is still much to think about if you choose to do this. This is because when reusing data, it is important to remember that the researcher who collected the data in the first place made a series of choices—the sum of which led to those specific data being collected. If you use that data set in your own research study, you will not only be using the numerical data comprising the data set. Rather, you will also be using the understandings of the variables which the data set is about, and the way that those understandings were reflected in a measurement instrument and the way that this measurement instrument was put into practice. For example, when the measurement instrument used to collect a set of data takes the form of a questionnaire, the researcher designing that questionnaire has actively
chosen which questions to ask, which not to ask, and how the questions are phrased. Equally, an observation schedule is the outcome of choices made by the researcher about what to look for, and record, when observing. When using measurement instruments such as web scrapers or technical devices, the researcher has chosen which data the scraper will extract or what type of information the technical measurement instruments will record. Consequently, a data set is not a neutral, value-free “thing” that you can simply “pick up” and insert into your study. Should you decide to reuse data collected by somebody else, you adopt their operational definitions of the variables in question, as well as the measurement instrument they developed based on those operational definitions. Therefore, any weaknesses in the operational definitions developed by those who collected the data will also be weaknesses in your research. So will any discrepancies between the issues the research is designed to address, the measurement instrument used to measure the variables of interest to those issues, and the way that the researcher went about collecting the data using that measurement instrument. What this means is that if you are intending to use an existing data set in your research design, you will need to be able to describe the operational definitions that underpin the items making up the measurement instrument that was used to produce the data. Moreover, it must be possible for you to acquire knowledge about how that measurement instrument was put into practice. Only then will you be able to assess the credibility of using that data in your study.

CONCLUSIONS

This conclusions section is in two parts. The first part is focused on conclusions specific to the discussion in this chapter.
The second part is wider in focus and is designed to put together, and round off, the discussion that we have had about quantitative design-related considerations across both this chapter and the one before it (Chapter 8). This second part is important as we have covered a lot of interconnected ground in these two chapters about quantitative research design (Chapters 8 and 9) and therefore neither chapter can be read in isolation. Consequently, the second part of the conclusion is about connecting the dots between each of the chapters.

Part 1: Conclusions Related to This Chapter

This chapter has highlighted that putting into practice the collecting of the numerical data that you will then base your statistical analyses on involves a series of interrelated decisions. To arrive at credible research findings by way of statistically analyzing numerical data, that numerical data needs to represent relevant and credible information able to be used to answer the research questions you are interested in. This involves identifying variables of interest related to those research questions and making those variables measurable. It is the measurements of these variables that provide the numerical data you will apply statistically based analyses to, the interpretation of which will provide you with the findings of your study. Once you have decided what you will need to measure in order to answer your research questions, you will then need to develop some sort of instrument able to take those measurements. In quantitative survey research this instrument (usually a questionnaire) is made up of a set of measurement items related to aspects of the variables of interest. The way each item in the measurement instrument is worded is an
important part of the validity of your research, as is the way that the measurement instrument is administered. Therefore, a key message in this chapter has been that making variables measurable, developing items that will measure them, and then finding a way to put the measurement into practice to actually collect the data requires a lot of careful thinking about a series of interconnected aspects of both the design and conduct of the research. It is central to the validity of the research. This is because if, for example, the items don’t measure what they are supposed to, or the response rate is too low, then this will affect the validity of the research design and the findings that arise from it. Therefore, it is important to remember that simply obtaining numerical data about a study population, or a sample drawn from a study population, does not ensure credible research findings. What does make such findings credible is when the numerical data represent relevant and credible information about whatever it is the research is being designed and conducted to address.

Part 2: Putting It All Together—What Have We Learned From Chapters 8 and 9?

We have covered a lot of ground in the two chapters about quantitative research design (Chapters 8 and 9). Much of this ground has been related to issues around ensuring that your research design is statistically reasonable. Our discussion has highlighted that using statistical procedures to analyze your data does not in itself make your research findings statistically reasonable. Used well, procedure(s) for statistically analyzing the research data can increase the credibility of your research. However, used poorly, or in an unthinking manner, they can reduce or even undermine that credibility.
Therefore, in Chapter 8 we emphasized

• what you need to think about to be able to decide what procedure(s) for statistically analyzing the research data will enable credible answers to your research questions to be deduced from the outcomes of that analysis, and
• what you need to think about to be able to make statistically reasonable claims about a large group of people if your plan is to collect data from a relatively small subset of that group—that is, a sample of that large group of people.

In addition, we have discussed the following design-related matters focusing on enabling the collection of relevant and credible data in this chapter, Chapter 9:

• what you need to think about to measure the variables of the study to make sure that the data represents relevant information about the variables addressed by the research question(s);
• what you need to think about to design a measurement instrument that, when put into practice, enables collecting relevant and reliable data;
• what you need to think about when putting the measurement instrument into practice and collecting data to make sure that the data produced are credible. This includes (a) the type of measurement instrument you will use and why, (b) what variables that instrument will measure and why, and (c) the way you will approach the people who will provide the measurements and why.
What we hope to have made clear across these two chapters is that all of the above points need to be taken into account in order to arrive at credible research findings when using some form of quantitative approach. In a nutshell, when developing your research design, you will need to

• identify and develop precise research questions,
• select statistically reasonable analysis procedures that will enable you to obtain the information that you will need to know to answer those questions,
• design a sampling strategy and a measurement instrument when planning for how you will obtain the numerical data you need for those analysis procedures,
• and then put that measurement instrument into practice and actually collect the data.

However, you are likely to find that what happens when you put your research design into practice does not always go exactly according to your plan. For example, you may not be able to carry out probability sampling as you planned to do, because you find that you do not have access to every member of the study population (and therefore, some members of the study population are in fact excluded from participating in your study). Or you may not be able to obtain a sample large enough to provide the data needed to fully meet the requirements of the procedure(s) for statistically analyzing those data that you need to use to be able to answer your research question(s). Part of responsible research and research design is “showing your warts” (Gernsbacher, 2018, p. 403) about what you actually did, and acknowledging ways in which the compromises you have made affect how the findings of the research can be interpreted. When “showing your warts” (Gernsbacher, 2018, p.
403) about what you actually did, you acknowledge any detours taken from the path planned for when putting the research design into action, as well as enable yourself and others to assess whether those detours matter in terms of what, and about whom, you will be able to say something (related to your research questions) at the end of the research. In the next chapter, we continue our exploration of putting research methods into practice when designing research by taking a look at issues related to designing and conducting research using more than one method.

SUMMARY OF KEY POINTS

• When employing quantitative approaches in your research design, you will identify your research questions, select statistically reasonable analysis procedures based on what you will need to know about to answer those questions, design a sampling strategy and a measurement instrument for obtaining the numerical data you need for those analysis procedures, and then put that measurement instrument into practice and actually collect the data.
• The degree of consistency across these aspects of the overall research design affects whether the data obtained are likely to carry credible information about the variables in question.
• Therefore, the form that a data collection procedure takes must be congruent with the plan for statistically analyzing the data produced by it, as well as congruent with the group of people from whom you want to collect data.
• An important consideration when designing your research is to make sure that how you collect your data enables you to obtain relevant and credible pieces of numerical information able to underpin credible answers to your research questions.
• Research questions are often about abstract and complex concepts. Those concepts are what make up the variables of interest in your study.
• Obtaining numerical data representing credible information about the issues of interest requires making the often complex and abstract variables identified as relevant for addressing the research question(s) measurable.
• When you decide upon a specific way of understanding a variable of interest, you are developing what is called the construct for that variable that you will use in your study.
• You will use some form of measurement tool or instrument to obtain numerical data about those variables in a reliable and valid way.
• The individual measurement items you develop related to the variables in your study collectively make up the measurement instrument to be used in your study.
• If your measurement instrument does not measure what it is supposed to measure about the variables of the study, no statistical analysis procedure will, in itself, enable valid research findings about the variables of interest to be achieved.
• A fundamental issue you will have to think through is whether you will design your measurement instrument from scratch or adopt one developed by others, and the implications of doing so.
• When using your chosen measurement instrument, you will need to ensure optimal, context-specific ways of collecting data from your population of interest in order to maximize the response rate for your study, and therefore the credibility of any findings based on those responses.
• For each item comprising the measurement instrument, you must decide what form that item will take (for example, closed- or open-response items).
• The validity of your research is affected by how your measurement instrument is designed. This includes how each item making up that measurement instrument is designed, including the wording, content, and form of each item.
• Validity is a complex concept, because the term validity focuses on different things depending on what we are talking about when using it.
• Types of validity include statistical validity, external validity, construct validity, and content validity.
• A research design, or a research finding, is valid when it is credible, well founded, reasonable, justifiable, and defensible.
KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

construct (of a variable)
construct validity
content validity
measurement instrument
measurement item
operational definition of a variable
quantitative survey
reliability
response rate
social desirability
validity

SUPPLEMENTAL ACTIVITIES

1. In light of the discussion in this chapter, we would like you to return to these questions we asked you to think about, and write answers to, after you had read Chapter 5:

• Why have I chosen a qualitative or quantitative approach for this study?
• What type of qualitative or quantitative approach am I using and why?
• What method(s) will I use to collect the qualitative or quantitative data and why?
• Once I have decided that, what else will I need to decide in order to be able to put those methods into practice?

The discussion in this chapter has identified additional things you will need to think about when answering these questions when using a quantitative approach in your research design. If your research design uses a quantitative approach, fold back on your thinking about these questions or go back to what you wrote. Now extend and develop that thinking or writing in light of the discussions in this chapter about how to

• select participants for your research,
• obtain data from them,
• operationalize your variables so that you can measure those variables of interest,
• choose which statistical analysis procedures to use in your research.

At the end of this reflexive process, you will have developed a large part of the methodology and methods sections of your research proposal in terms of what you will do, how, and why.

2. Imagine you have designed your research using some sort of quantitative approach. You take this design to your supervisor or your fellow students to get their feedback about what you have done.
During the discussion they ask you a number of questions that require you to justify the decisions that you have made. How would you answer questions related to the appropriateness of

a. the operational definitions of the variables of the study you have decided to use as part of your research design?
b. the data collection procedure you are planning to use? What features of that data collection procedure would you identify as contributing to the rigor of your design?
c. who you will include in your study—who they are and how many they are?
d. how you will actually collect the numerical data from people in your study? How will this ensure you have optimized the chance of having a high response rate?
e. the measurement instrument you have designed or decided to adopt?

FURTHER READINGS

Gorard, S. (2003). Quantitative methods in social science: The role of numbers made easy. Continuum.
Nardi, P. M. (2018). Doing survey research: A guide to quantitative methods (4th ed.). Routledge.
Stockemer, D. (2019). Quantitative methods for the social sciences: A practical introduction with examples in SPSS and Stata. Springer Nature.

NOTES

1. Participant here means a person from whom data is collected. You will also see participant used in qualitative research with a slightly different emphasis, namely as a co-participant with the researcher in the research.
2. "A web scraper is a specialized tool designed to accurately and quickly extract data from a web page" (scrapinghub, 2020). See also the discussion in Chapter 2 about the OKCupid case.
3. See https://en.wikipedia.org/wiki/Eye_color
4. One example is the Beck Depression Inventory-II (Beck et al., 1996), used to screen for depression and to measure severity and behavioral manifestations of depression; a second example is the Hamilton Depression Rating Scale (Hamilton, 1960) for individuals already diagnosed as suffering from depression, used to measure symptoms of depression in individuals before, during, and after treatment. They are two among many that are seen as relevant to the treatment of depression (American Psychological Association, 2019).
5. See the section above, How to make abstract variables measurable.
6. We have discussed this example in more detail in the opening section of this chapter.
7. See Chapter 7.
8. See Chapter 8.
9. See the discussion of representative samples in Chapter 8.
10 DESIGNING RESEARCH USING MIXED METHODS

PURPOSES AND GOALS OF THE CHAPTER

This chapter focuses on the type of reflexive thinking that underpins the development of a mixed methods1 research design. A key question that underpins the discussion is Why, and how, might using mixed methods thinking when designing research enable you to better answer your research problem or question(s) and contribute new knowledge to your area of interest? Or put another way, how can we design mixed methods research in such a way that Greene's inspirational vision for mixing methods might be realized? This is a vision where

[t]he core meaning of mixing methods in social inquiry is to invite multiple mental models2 into the same inquiry space for purposes of respectful conversation, dialogue, and learning from the other, toward a collective generation of better understandings of the phenomena being studied (Greene, 2007, p. 13, footnote not in original).

Therefore, as you are reading the chapter, we encourage you to think about the following:

• What does "mixed methods" actually mean?
• Why use mixed methods in the first place?
• What do I need to think about in order to decide what I will mix, how, and why when I design and then conduct my research?
• How can I navigate the diversity of thought that makes up the heterogeneous mixed methods field?

Asking these questions of yourself, and of your design, is an important part of developing a credible mixed methods design that can address the research problem it is designed to address. The discussion in the chapter provides a guide, as well as tips, for how you might reflexively think your way through questions such as these. Consequently, in the discussion to follow, the focus is not on providing a quick and simplified overview of "how to do" mixed methods. There has been a lot of writing in that genre already.
Rather, in this chapter the focus is on what you will need to think about when choosing, and then using, a mixed methods research approach in your research design.
The goals of this chapter are to

• Explore different views about what mixed methods research is.
• Demonstrate how the view held by a researcher about what mixed methods research is impacts on the way that they design their mixed methods study.
• Highlight some of the reasons that you might choose to use a mixed methods approach when designing your research.
• Establish what is meant by priority, timing, and mixing when designing mixed methods research.
• Illustrate types of basic mixed methods designs that reflect different decisions related to the priority and timing of the components that make up that design.
• Introduce accepted notations for describing different types of mixed methods designs.
• Emphasize that mixing is an analytical concept that permeates the entire process of designing mixed methods research. It is not simply a procedure or point in that design.
• Highlight that matters related to mixing are first and foremost matters of thinking about what will be mixed and why, after which considerations and decisions about how that mixing might occur can be made.
• Demonstrate that there are a number of possible levels at, and ways in which, mixing can be thought about or occur.
• Emphasize that what levels mixing occurs at, and what associated strategies are used for such mixing at those levels, in any specific mixed methods study, depends on the purpose for the mixed methods study in the first place.
• Provide concrete strategies for navigating the complex and contested field of mixed methods research.

WHAT IS A MIXED METHODS RESEARCH APPROACH?

All understandings, and therefore definitions, of mixed methods research approaches have in common that "something" related to two (or more) different research methods or approaches is being "mixed" in some way at a "point or some point(s)" in a research study to add "something more" to the study than using only one method would.
However, after this there are various understandings, and different emphases, about what the “something” being mixed is, what aspect of that “something” is “being mixed,” at what point in a study that mixing occurs, as well as what the “something more” actually is that you will gain by that mixing. For example, you will find definitions of mixed methods that emphasize mixed methods as a single study combining qualitative and quantitative research approaches. However, you will also find definitions of mixed methods research that put the emphasis
on mixed methods as a single study where one of the methods is incomplete and cannot stand alone. In this view, the single study does not necessarily have to comprise qualitative and quantitative approaches in it. In the next section, we take a closer look at these different emphases in the way mixed methods is defined. To do so, we use brief snapshots of three mixed methods scholars' work. These scholars differ in what they emphasize when thinking about what mixed methods is, and is for.

Mixed Methods Research as Combining Qualitative and Quantitative Research Approaches

The most common understanding of mixed methods is that it is a research approach that uses and "mixes" in some way aspects of both qualitative and quantitative thinking and/or methods and/or data in the same study. An often-cited definition of mixed methods research developed by Johnson et al. (2007) reflects this understanding: "Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration" (p. 123).3 However, there are differences among mixed methods researchers in terms of what they emphasize when designing research to combine in some way elements of qualitative and quantitative approaches.

Snapshot 1: Mixed Methods as a Method That Combines Qualitative and Quantitative Approaches

John Creswell, an influential scholar in mixed methods research, makes it clear that when thinking about designing mixed methods research, his emphasis and understanding of mixed methods is as a method (Creswell, 2015).
Understanding mixed methods as a method puts emphasis on developing detailed sets of procedures for the collection and analysis of different types of data—quantitative and qualitative. For example, how that data is collected, what form it must take, and the procedures that should be applied to that data once it has been collected. Such elaboration and development of procedures has resulted in typologies or categories of types of mixed methods designs. It is then possible for researchers to choose one of these types of core mixed methods research designs and adapt it to meet the purpose and address the questions underpinning the study (Creswell & Plano Clark, 2018). This emphasis on methods, and procedures related to the data collected by those methods, is reflected in the core characteristics of mixed methods that he and Plano Clark (2018) identify, and state, "[A]dequately describe mixed methods research" (p. 5). These characteristics are that mixed methods research

• collects and analyzes both qualitative and quantitative data rigorously in response to research questions and hypotheses,
• integrates (or mixes or combines) the two forms of data and their results,
• organizes these procedures into specific research designs that provide the logic and procedures for conducting the study, and
• frames these procedures within theory and philosophy. (Creswell & Plano Clark, 2018, p. 5)

Snapshot 2: Mixed Methods as a Way of Thinking That Combines Aspects of Qualitative and Quantitative Thought

Jennifer Greene, another prominent mixed methods scholar, agrees that mixed methods studies comprise qualitative and quantitative approaches. However, she does not put the emphasis on methods and procedures in her understanding of what mixed methods research is. Instead, she understands, and therefore places emphasis on, mixed methods as a way of thinking. Reflecting the complexity, and at times messiness, of social contexts, she emphasizes the importance in social inquiry of "engaging with difference" (Greene, 2002, p. 23). Such engagement applies to both the differences in what is being studied (i.e., the complexity or messiness of a social setting or phenomena of interest) and the differences in the ways that it can be studied (i.e., different research methods, methodologies, or approaches). This way of thinking about what mixed methods is, and is for, enables "possibilities of including not just more than one method in an inquiry study but also the possibilities of including more than one methodology, more than one philosophical paradigm, as well as more than one discipline and substantive theory" (Greene, 2015, p. 607). It is a thinking driven by a desire to better capture the multidimensional reality that we study. Consequently, she views designing mixed methods research as a reflexive, thinking-driven process. This is a process that enables multiple ways of thinking to enter into the same inquiry space for purposes of respectful conversation, dialogue, and learning. It involves creative (not set or prescribed) ways of thinking. Thus, mixed methods research is much more than a type of method or procedure.
Rather, it is a whole-of-study approach and a way of thinking that impacts all parts of the research design, including what will be studied, how, and why (Greene, 2015).

A Different Definition and View of What Mixed Methods Research Is

Some mixed methods scholars do not agree that a study must comprise at least one qualitative and one quantitative component in order to qualify as mixed methods research. Rather, they put the emphasis on mixed methods research and its design as a single study. Therefore, they do not necessarily agree that the single study must have both a qualitative and quantitative component in it in order to qualify as a mixed methods design.

Snapshot 3: Mixed Methods as a Single Study, Where One of the Methods Is Incomplete and Cannot Stand Alone

Foundational scholar in mixed methods research, Janice Morse, views mixed methods research as a single study—hence her use of the term mixed-method rather than mixed methods. This mixed-method single study always has a core component and one or more
supplemental component(s). The core component is the primary method in the study, which "must be conducted to a standard of rigor, such that, if all else were to fail, it could be published alone" (Morse & Niehaus, 2009, p. 23). In other words, the core component is a complete method in its own right. In contrast, the supplemental component(s) in a mixed-method study only makes sense in terms of supplementing the core component in some way. This is because the supplemental component(s) is in some way incomplete and cannot stand alone as a complete study in its own right. Using one or more supplemental components is what can enable you to address a research problem or question in a fuller or more complete way. This is because the results of the core component are enhanced in some way by the results of the supplementary strategy(ies). In this view, thinking about mixed methods is not restricted to only combinations of qualitative and quantitative components (see, for example, Morse & Niehaus, 2009; Morse, 2017; Morse & Cheek, 2014, 2015; Cheek & Morse, 2022). A mixed methods study can comprise two or more qualitative methods or strategies or two or more quantitative methods or strategies.

TIP

BE AWARE: DIFFERENT WAYS OF UNDERSTANDING WHAT MIXED METHODS RESEARCH IS IMPACTS UNDERSTANDINGS OF WHAT MULTIPLE METHODS RESEARCH DESIGN IS

There is often confusion about the difference between mixed methods and multiple methods (also known as multimethod research). Different understandings of mixed methods research can lead to different understandings of what multiple methods research is. For example,

• If one of the defining features of mixed methods (and therefore mixed methods design) is the use of some form of qualitative and quantitative methods, then a study only using more than one quantitative method, or a study only using more than one qualitative method, cannot by definition be a mixed methods study.
Studies that use two qualitative methods or two quantitative methods would be considered multiple methods (Creswell, 2015).4

• If, as in Morse's thinking, the defining feature of mixed-method research is the combining of one complete method and one incomplete method, then the use of two qualitative methods, or two quantitative methods, in the same study can meet her definition of mixed-method research as long as one of those methods is incomplete. However, designs comprising one complete quantitative and one complete qualitative method would not meet her definition of mixed-method research but be considered multiple methods (Morse, 2017).5

This highlights that the way a researcher thinks about what mixed methods research is affects not only the way that they think about aspects of a mixed methods research design when developing it. It also affects the way that they think about research designs using multiple methods. It is also another example of why there is a lot of thinking to do when designing research in this area, as the terms are not stable and even the same term can mean quite different things depending on the point of view of the researcher.
What Can We Learn From These Snapshots About What Mixed Methods Is?

These snapshots highlight (and remember these are only three of many possible snapshots of scholars' work that we could have used) that different mixed methods scholars, while agreeing on some things, put different emphases on aspects of mixed methods research. Consequently, there is not full agreement among scholars about what mixed methods is. Therefore, when you are reading reports of mixed methods research studies, or a textbook about mixed methods research, you will need to think about the implicit understanding of mixed methods research that report or that textbook is premised on. This will enable you to locate that textbook or report in a specific way of thinking about, and understanding of, mixed methods research. It will also help you navigate what at times can seem to be a bewildering, and frankly overwhelming, array of terminology built up in this area. There has been a lot of terminology related to the field of mixed methods research developed in a relatively short time by researchers working in many different disciplines. This has sometimes resulted in new terms for the same idea being developed in parallel—mostly unwittingly but not always.6 In such cases, we have ended up with more than one term for the same idea or a (very) slightly different idea, and with a situation in which "terminology, structure and even the principles—the rules—for what is good mixed-method research, have become complicated, contradictory and contested" (Morse et al., 2018, p. 564; Johnson et al., 2007). This situation has been compounded by the fact that some scholars may have changed or modified the terms they use as their ideas, or the ideas in the field of mixed methods research itself, developed and changed, or as agreed-upon terminology emerged.7 Of course, in itself this is not necessarily a bad thing, as it reflects the emergence of a field in development.
However, what is not helpful is if the researcher does not provide insights into what changes they have made over time to their mixed methods thinking, and therefore to the terminology that they use, and why. Much confusion can be created for the reader new (or even not so new) to the field when no reason for, or even acknowledgment of, those changes is given. The take-home message from all this is that when designing research using mixed methods, you will need to think through, make a decision about, and then declare the understanding of mixed methods research that you will use to provide the methodological and theoretical framework for that design. The development of the various areas that together make up that design, as well as the terms used to describe those areas, will need to be consistent with that declared understanding. Otherwise, you may find that your study becomes a muddle very quickly, both at a conceptual level and at the level of the terminology you are using.

TIP

SOMETHING ELSE TO THINK ABOUT: IS MIXED METHODS REALLY A NEW FIELD?

The fact that mixed methods research is a contested field still in development does not mean that it is a new field (see Maxwell et al., 2015; Fetters, 2016). As Pelto notes, "[S]ome anthropologists and sociologists (and others) have used mixed methods in
field work for the past 80 years, and there are studies from early in the twentieth century that clearly fall within the definition of 'mixed methods'" (Pelto, 2015, p. 734). Given this, he expresses surprise at some writers suggesting that mixed methods research per se is a relatively recent development.8 However, what certainly is a relatively recent development is the exponential growth of interest in the area of mixed methods research. This has been accompanied by an ever-increasing array of notations, terminologies, procedures, and types of research designs related to mixed methods. This is reflected in the rapid proliferation of journals, books, textbooks, book chapters, journal articles, methods manuals, workshops, conferences, and conference papers related to mixed methods. This has led to claims that "mixed methods research has evolved to the point where it is a separate methodological orientation with its own worldview, vocabulary, and techniques" (Tashakkori & Teddlie, 2003, p. x).

WHY USE A MIXED METHODS RESEARCH DESIGN?

Mixed methods approaches are designed to provide in some way a better understanding of what you are studying than only using a single method can. This is because "any given approach to social inquiry is inevitably partial" (Greene, 2008, p. 20). Therefore, using more than one research approach and way of thinking in a research design can

• provide more comprehensive and multidimensional understandings of complex social settings and problems.
• allow for the use of different approaches to research, and the data those approaches produce, to be mixed in some way to enable those comprehensive and multidimensional understandings to emerge.
Given this, it follows that if you choose to use a mixed methods approach in your study, then you have decided that mixing different types of information, perspectives, or data about the particular issue or problem that is the focus of your study will in some way assist you to address that issue or problem. This means that very early in the research design process you will need to think through, and say something explicitly about, why mixing methods in your study design will give you "more" (e.g., a better, fuller, or deeper answer to your research questions) than using a single method. Morse, Cheek, and Clark identify some of the reasons for using a mixed-method9 research design. These include the following:

1. Sequentially increasing the understanding of a phenomenon
2. Obtaining different perspectives on the same phenomenon
3. The phenomenon demands different approaches
4. Increasing complexity
5. Increasing scope; determining boundaries (Morse et al., 2018, p. 565)

Points 1 through 3 are usually planned at the outset of the project, whereas Points 4 and 5 reflect that the study design may evolve as the research unfolds. For example, the results of the study indicate that the problem you have set out to study may be more complex than
you first thought, or the boundaries for what is to be studied are "more expansive than first understood" (Morse et al., 2018, p. 567). All five reasons are related in some way to providing a "better" answer to a specific problem by incorporating different perspectives and methods to produce a more complete picture or understanding of what is going on. For example, why people answered a survey in the way that they did. In this case you might, for example, conduct a quantitative survey about the area of interest of your research and then interview people about particular aspects of those survey findings in order to help explain them.

TIP

FOUNDATIONAL AND IMPORTANT WORK ON THE IDEA OF PURPOSE IN RELATION TO MIXED METHODS RESEARCH

In 1989, Greene and her fellow researchers Caracelli and Graham developed one of the first typologies of purposes of mixed methods. This typology classified types of mixed methods studies by purpose. It identified five categories of purpose—triangulation, complementarity, development, initiation, and expansion:

TRIANGULATION seeks convergence, corroboration, correspondence of results from different methods.

COMPLEMENTARITY seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method.

DEVELOPMENT seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions.

INITIATION seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method.

EXPANSION seeks to extend the breadth and range of inquiry by using different methods for different inquiry components. (Greene et al., 1989, p. 259)

Since then, others have picked up this work and developed it (e.g., Bryman, 2006).
Therefore, you will see some variation in typologies of purpose for mixed methods research when reading in this area.

The Importance of Thinking About Why You Might Use a Mixed Methods Approach

What is absolutely crucial to think about from the outset of the design process is why using mixed methods in some way will strengthen your research. Why will it provide enhanced understandings of the problem or issue that your research is focused on and being designed to address? Decisions about how methods might be mixed, and why, must be thought through and justified in relation to a specific piece of research. Just having more or different types of data or methods does not necessarily, in and of itself, give a better, fuller, or deeper answer to your research problem or questions. Therefore, whether having more, or different types of, data or methods will give a better, fuller, or deeper answer to your research question depends. It depends on what the purpose is of mixing one or more methods in some way, and how that purpose assists in
achieving the overall goal (purpose) of the research. An important part of mixed methods research design is being able to justify how the design that you come up with (including your choice of which methods will be mixed, when, and how) will enable you to achieve the overall goal of the research. We develop this point in the next section of the chapter, where we take a closer look at the decisions you will need to make about the priority and timing of the components that make up your mixed methods study.

PRIORITY AND TIMING OF THE COMPONENTS IN A MIXED METHODS STUDY

There is broad agreement among mixed methods scholars that when designing mixed methods research, consideration needs to be given to three key, and interconnected, dimensions related to the components that make up the mixed methods study. You will often see these dimensions, and the thinking about them that produced the mixed methods design, referred to as considerations about the "priority, timing, and mixing" (Creamer, 2018, p. 61) of the components.10

Thinking About Matters Related to Priority or Weighting of Components

A notation system developed in 1991 by Morse, and subsequently developed by others, is commonly used when describing or writing about mixed methods and mixed methods research designs. It uses the notations QUAL or qual to refer to qualitative components in that design that draw on qualitatively derived logics of inquiry. The notations QUAN or quan are used to refer to quantitative components that draw on quantitatively derived logics of inquiry. The capitalized notation QUAL means that the overall study is a qualitatively driven one in terms of its overriding methodological or philosophical emphasis. Likewise, the notation QUAN indicates that the overall study is a quantitatively driven one in terms of its overriding methodological or philosophical emphasis.
Therefore, QUAN is used when “the project is deductive, driven by an a priori theoretical framework” and QUAL when the project is “driven by an inductive process and the theory developed qualitatively” (both quotes from Morse, 1991, p. 121). In other words, capitalization reflects the priority or weighting of the components in the study in relation to the overall logic of inquiry that is shaping the mixed methods study. Morse’s use of the noncapitalized and italicized qual or quan notation indicates that that component is a supplementary one. Lesser priority or weighting is given to this component in the context of the overall design. When conducting the supplementary component, you adhere to the logic of inquiry underpinning that component. As Morse explains,

• “With QUAL-quan designs, you are thinking inductively overall and when doing the core component, but you are thinking deductively when conducting the supplemental component”

• “With QUAN-qual designs, you are thinking overall deductively to answer your research aim and in the core component, but when you are working in the supplemental component, you are thinking inductively” (both quotes from Morse, 2017, p. 5)
Creswell and Plano Clark (2011) identify three options when designing mixed methods research related to the priority given to the various components:

• “The two methods may have an equal priority so that both play an equally important role in addressing the research problem” (Creswell & Plano Clark, 2011, p. 65). Our Comment: In this case, the notation used would be QUAL and QUAN to reflect different logics of inquiry, both of which are equally weighted in the overall study.

• “The study may utilize a quantitative priority where a greater emphasis is placed on the quantitative methods and the qualitative methods are used in a secondary role” (Creswell & Plano Clark, 2011, p. 65). Our Comment: In this case, QUAN would be capitalized and qual not, reflecting the greater relative weighting of, and priority given to, quantitatively and deductively derived logics of inquiry in the overall study design.

• “The study may utilize a qualitative priority where a greater emphasis is placed on the qualitative methods and the quantitative methods are used in a secondary role” (Creswell & Plano Clark, 2011, p. 65). Our Comment: In this case, QUAL would be capitalized and quan not, reflecting the greater relative weighting of, and priority given to, qualitatively and inductively derived logics of inquiry in the overall study design.

However, if we adopt Morse’s (2017) understanding of mixed methods as a single mixed-method study comprised of one component that is complete and one or more supplementary components that are incomplete,11 then the first weighting option, QUAL and QUAN, identified by Creswell and Plano Clark is not possible. This is because the supplemental component, and therefore the strategies it uses, is by definition incomplete. It provides additional data that can supplement in some way the findings of the core component and thereby extend the findings of the overall study.
Therefore, it cannot be of equal weight or afforded the same priority as the core component. This means that the weighting option QUAN + QUAL in the one mixed-method study is not possible. QUAN + QUAL would be a multiple methods study where both components are full studies and the results of each of the studies can be compared to build a picture about different aspects of a research problem area of interest. What these different views about the possibility of having QUAL and QUAN weighting options in the same study highlight is a point that we have made previously, and that we will continue to emphasize in this chapter: your understanding of what mixed methods is affects all parts of your thinking when designing your mixed methods research. It is important that your design is congruent with that understanding. This means that when you are designing a mixed methods study, you will need to make explicit what this understanding is and then make sure that the entire design is in keeping with it.

Thinking Through, and Deciding About, Matters Related to the Timing of the Components

A second key consideration when thinking through the design of your mixed methods study relates to when the various components that make up the design will be undertaken in the study. This is a consideration related to the relative timing or pacing (Morse &
Niehaus, 2009) of the components. For example, you will need to think about whether the components will be conducted at the same time or whether one method/type of data collection will follow the other, and why. In the mixed methods research literature, a + sign is commonly used to indicate that the components will be conducted at the same time (referred to as concurrently or simultaneously) and an arrow → used to indicate that they will be conducted one after the other (referred to as sequentially). Mixed methods designs are often drawn, named, and/or described using these notations about pacing/timing as well as those related to weighting/priority. Often, three types of “basic” or “core designs” for mixed methods research are identified. These are the following:

• The convergent design, which “involves the separate collection and analysis of quantitative and qualitative data. The intent is to merge the results of the quantitative and qualitative data analyses” (Creswell, 2015, p. 36)

• The explanatory sequential design, where the intent is to “begin with a quantitative strand and then conduct a second qualitative strand to explain the quantitative results” (Creswell, 2015, p. 38)

• The exploratory sequential design, where the intent is to “first explore a problem through qualitative data collection and analysis, develop an instrument or intervention, and follow with a third quantitative phase” (Creswell, 2015, p. 39)

Figure 10.1 is a diagrammatic representation of these three types of core designs developed by Creswell and Plano Clark (2018) and reproduced with permission here.

FIGURE 10.1 ■ General Diagrams of the Three Core Designs. Note: Taken from Creswell and Plano Clark (2018, p. 66). Reproduced with permission.

On the other hand, Morse (2017; Morse & Niehaus, 2009) identifies eight possible types of basic mixed methods designs. These basic designs can be divided into two groups:

1. basic types of mixed methods research designs using qualitative and quantitative components, and
2. basic types of mixed methods research designs using two different qualitative components or two different quantitative components.

Each of these groups comprises four possible basic designs. Morse explains:

Considering simultaneous (+) and sequential (→) pacing, and combinations of the qual and quan supplemental projects, we have four basic mixed-method designs that use both qualitative and quantitative methods:

QUAL + quan (Qualitatively-driven with a simultaneous quantitative supplement)
QUAL → quan (Qualitatively-driven with a sequential quantitative supplement)
QUAN + qual (Quantitatively-driven with a simultaneous qualitative supplement)
QUAN → qual (Quantitatively-driven with a sequential qualitative supplement) (Morse, 2017, pp. 6–7)

She continues:

Further if we add two different methods that use two qualitative or two quantitative components, there are four more designs:

QUAL + qual (Qualitatively-driven with a simultaneous qualitative supplement)
QUAL → qual (Qualitatively-driven with a sequential qualitative supplement)
QUAN + quan (Quantitatively-driven with a simultaneous quantitative supplement)
QUAN → quan (Quantitatively-driven with a sequential quantitative supplement) (Morse, 2017, p. 7)

This way of grouping types of mixed methods research designs builds on, and is congruent with, Morse’s thinking about, and understanding of, mixed methods research—a single mixed-method study comprising a core component and incomplete supplementary component(s). The second group of four basic designs identified by Morse would not be considered mixed methods by other mixed methods scholars if their understanding of mixed methods requires the use of both qualitative and quantitative components or strands in the same study.
In line with Morse’s view of what mixed methods research is, the design QUAN + QUAL does not appear in this list.12 However, other researchers would add this if, in their thinking about, and understanding of, mixed methods research, it is possible to give equal priority to the components in the design. This highlights that any typology or classification of mixed methods research designs, not just the one by Morse, is built on assumptions about what mixed methods is and what mixed methods are for. Any typology or classification of basic mixed methods designs cannot be understood apart from the thinking and understanding of mixed methods research that gave rise to it. If you copy, paste, and/or “select” a design from some sort of typology of mixed methods designs, you also copy, paste, and select those understandings—therefore, you need to make sure that you know what they are. How you design your research should then be in keeping with this view of what mixed methods is.
TIP
NAVIGATING CHANGING USE OF TERMINOLOGY, AS WELL AS VARIATIONS IN TERMINOLOGY USED FOR THE SAME IDEA

One of the problems that you will no doubt have when reading about basic or core mixed methods designs is the changing use of terminology in the area as scholars’ thinking has developed rapidly over relatively short periods of time. For example, the following table from Creswell and Plano Clark (2018) shows the changes and adjustments in their thinking about types of core mixed methods designs over time.

FIGURE 10.2 ■ Our Changing Typologies

Our 2003 Typology [Creswell, Plano Clark, Gutmann, & Hanson, 2003] | Our 2007 Typology [Creswell & Plano Clark, 2007] | Our 2011 Typology [Creswell & Plano Clark, 2011] | Our Present Typology of Core Designs
Sequential explanatory | Explanatory design | Explanatory sequential design | Explanatory sequential design
Sequential exploratory | Exploratory design | Exploratory sequential design | Exploratory sequential design
Sequential transformative | (none) | Transformative design | (none)
Concurrent triangulation | Triangulation design | Convergent parallel design | Convergent design
Concurrent nested | Embedded design | Embedded design | (none)
Concurrent transformative | (none) | Transformative design | (none)
(none) | (none) | Multiphase design | (none)

Note: Taken from Creswell and Plano Clark (2018, p. 59). Reproduced with permission.

Rapid and constant change in terminology and typologies, such as in Figure 10.2, can make the field of mixed methods research very confusing at times. It is important to make sure that you are aware of possible changes and adjustments such as these in order not to be overwhelmed by the seemingly endless amount, and layers, of terminology that many scholars believe has made the field of mixed methods overly complicated. It is also important that you are alert to the variations in terminology used for the same idea (e.g., timing and pacing, or priority and weighting) when reading a textbook or a journal article about mixed methods research.
AN EXAMPLE OF HOW TO CONNECT PURPOSE, PRIORITY, AND TIMING AND WHY THIS MATTERS

An important part of designing mixed methods is to establish why using mixed methods in some way will strengthen your research. How will it provide enhanced understandings of the problem or issue that your research is focused on and being designed to
address? Decisions about how methods might be mixed, or how components might be prioritized and paced, can only be made once the purpose for that mixing is clearly understood. Deciding where, when, and how they will be brought together or interfaced (Morse, 2017) in a mixed methods design depends on the problem you are addressing. As Greene (2008) points out, “[A] given mix of two methods, say structured surveys and in-depth interviews, can accomplish several different mixed methods purposes, and thus be characterized by several different mixed methods design dimensions” (Greene, 2008, p. 17). For example, consider the following scenario:

• A researcher has been asked to design and conduct a research study for a local government authority that is concerned about low COVID-19 vaccination rates in that local government area. The local authority needs information that can be used to develop strategies to optimize COVID-19 vaccine uptake in the area that it is responsible for.

• The study purpose has been identified as to 1. identify barriers and enablers related to why people in the area decide or don’t decide to be vaccinated and then 2. use this information to inform the development of strategies for optimizing COVID-19 vaccine uptake in the geographical area.

• Mixed methods has been identified as a suitable research approach because a mixed methods study design can provide a more complete picture of what is going on in this context related to why people decide to vaccinate or not vaccinate. It can do this by combining information from complementary kinds of data or sources.

However, the researcher still has a lot more thinking to do before they are in a position to actually begin this mixed methods study.
This is because there are different options for how this study might be designed in relation to the priority and timing of the components (structured surveys and in-depth interviews) that will make up that study. Which option the researcher chooses will depend on what they want to know more about related to the purpose of the study. For example, consider the following options for designing this study.

Option One: A QUAL → quan Design

The researcher decides that they need to know more about what the barriers and enablers are in this specific context. To do this, they decide that they will use qualitative in-depth interviews, the purpose of which is to find out how and why people in this geographical area think, and then make decisions, about whether to be vaccinated or not. What are participants’ perceived barriers or enablers related to deciding to be vaccinated? The analysis of the interviews will reveal a range of perceptions, understandings, and experiences related to those barriers and enablers. This range of perceptions, understandings, and experiences about barriers and enablers can then be used to inform the development of items for a structured questionnaire forming part of a quantitative survey study. This will enable the researcher to identify, among other things, how common these perceptions, understandings, and
experiences are in the local population of interest, as well as the degree of importance particular segments of the population of this local geographic area attribute to each of these characteristics.

Timing and Priority. The design of this study is QUAL → quan. The core component providing the overall inductive theoretical drive of the project is the in-depth interviews (QUAL), the emergent findings of which are then supplemented by quantitative survey data (quan) about those findings.

Type of Basic Design. As such, it is an example of a qualitatively driven (QUAL) mixed methods project comprising a core QUAL component and a supplementary quan component designed to provide more information about the findings of that QUAL component. It is a sequential mixed methods design: QUAL → quan.

Purpose:

• Producing a more complete picture of what is going on in the context of interest (perceived barriers or enablers related to deciding to be vaccinated in the local government area) using inductive analysis of qualitative in-depth interviews to identify what these barriers and enablers are perceived to be, and why

• Using supplementary data from the survey to identify the frequency of these characteristics and the degree of importance particular segments of the population of this geographic area attribute to each of them

• Enabling the development of targeted strategies, derived from the ground up rather than imposed, to address the issues around vaccine uptake

Option Two: A QUAN → qual Design

The researcher decides that they need to know more about what the barriers and enablers are in this specific context. They decide that to find this out they will develop a closed questionnaire for a quantitative survey that they will conduct in the geographical area.
They also decide that they will use variables and factors that have been identified in other studies related to uptake or non-uptake of vaccinations to inform the items they will use for the questionnaire. The analysis of the survey will provide a picture of the frequency and distribution of those variables or factors throughout the geographic population of interest. To gain a better understanding of the reasons for this frequency and distribution, the researcher conducts a series of in-depth interviews designed to probe and add qualitative dimensions to the findings of the survey. A purposive sample of participants for the in-depth interviews is developed from information-rich people living in the geographical context of interest. The interviews probe areas such as these: What is it about a specific variable identified in the survey as affecting the rate of vaccination (for example, the perceived trustworthiness of sources of information about a vaccine) that affects the rate of uptake? Why is this? How might issues identified as being related to this variable be addressed?

Timing and Priority. The design of this study is QUAN → qual. The core component providing the overall deductive theoretical drive of the project is the quantitative survey (QUAN), the findings of which are then supplemented by qualitative interview data (qual) about those findings.
As such, it is an example of a quantitatively driven (QUAN) mixed methods project comprising a core QUAN component and a supplementary qual component designed to provide more information about the findings of that QUAN component. It is a sequential mixed methods design: QUAN → qual.

Purpose:

• Producing a more complete picture of the information gained from the survey alone by combining information gained from that survey with complementary kinds of data or sources (the in-depth interviews)

• Enabling development, further analysis, and refinement of the survey findings using the contrasting kind of data gained from the in-depth interviews about those survey results. This includes context-specific, rich, and in-depth data about the views and perceptions of people in the area of interest about the variables of interest.

• Enabling development, further analysis, and refinement of the survey itself for future research purposes by gaining insights about how the items in the survey questionnaire were understood or by identifying other possible variables or items of interest for that survey

What Can We Learn From This Example?

This example reveals two different options for a mixed methods research design where

• both address the same issue (barriers and enablers related to why people in an area decide or don’t decide to be vaccinated, and then using this information to inform the development of strategies for optimizing COVID-19 vaccine uptake in the geographical area),

• both use the same methods (structured surveys and in-depth interviews),

• but each assigns different priority and timing to the structured surveys and the in-depth interviews, reflecting differences in purpose for the mixed methods research being undertaken in each of the options.

In option one, the structured survey provides supplementary data for the interviews, whereas in option two the interviews provide supplementary data for the structured survey.
The key point in all this is that priority and timing are not simply decisions about which type of data will be collected and when in a mixed methods study. Decisions related to priority and timing are also based on thinking about why that data will be collected in the way it is. How will collecting the different types of data in the way that you do help you answer your research questions? What do you want those sets of data to be able to do or add that just having one set of data cannot? Considerations related to timing or pacing also require you to think about when and where in the research design process the data and results related to the various components of the study will be brought together, mixed, or integrated in some way. What will be brought together and how? When and where will this happen in the research process? And, very importantly, how will this enable the production of “findings that are greater than the sum of parts” (Woolley, 2009, p. 7)?
In the next section, we develop this point by taking a closer look at the idea of mixing, which, along with priority and timing, is a central consideration when designing mixed methods research.

Activity: Thinking About How to Report “Findings That Are Greater Than the Sum of Parts”

If the aim of using a mixed methods approach is to enable the production of “findings that are greater than the sum of parts” (Woolley, 2009, p. 7), then is it possible to report the components of a mixed methods study separately? This is an issue about which you will come across very different viewpoints. Different journals may even have different views and policies related to this. Explore this issue further by considering the following questions. If possible, debate them with other mixed methods researchers or students:

• Is “a helpful way to think about writing empirical articles for mixed methods research is to think about generating three written products from a single study: a quantitative article, a qualitative article, and an overall mixed methods article” (Creswell, 2015, p. 92)? Why or why not?

• Is it sufficient to simply state that a written product is a report of one of the components of a larger single mixed methods study? Why or why not?

• Why do you think that the journal Qualitative Health Research does not publish stand-alone reports of “the supplemental component from a quantitatively driven study—QUAN-qual?” (Cheek, 2021, p. 1017; you might find it useful and interesting to read the entire editorial from which this quote was taken).

MIXING—A CENTRAL CONSIDERATION IN MIXED METHODS RESEARCH

Mixing is a reflexive process in which conceptual ideas, derived from different ways of thinking, for example, data or methods, are woven and re-woven “into a meaningful pattern and a practically viable blueprint for generating better understanding of the social phenomena being investigated” (Greene, 2007, p. 16).
The idea of “mixing” is central to mixed methods thinking and a “part of the specialized language of mixed methods researchers” (Creamer, 2018, p. 5). Considerations such as what will be mixed, when, and how in your study are key decisions when designing mixed methods research. Such mixing can occur at one or more levels—“methods, methodologies, and paradigms—as well as substantive mixing across disciplines and theories” (Greene, 2015, p. 607). This means that when you are designing your mixed methods study, you will need to think about, and make explicit, at which of these levels, and how, such mixing will occur. For example, if your mixed methods study design comprises qualitative and quantitative components, you will need to consider what, related to those qualitative and quantitative components in the research design, will actually be mixed. Will it be data, methods, ways of thinking, logics of inquiry, all or some of the above? How, and why, will this
choice of what is to be mixed enable you to better answer the question(s) driving your research in the first place? The same types of questions apply if you take the view that mixed-method research involves the mixing of data obtained from one component that is complete in that study with data from another component that is incomplete in that study. How will the findings from the supplemental component(s) be integrated with the findings of the core component? When and where in your design will the point(s) of interface between the two types of data be? How will the integration of findings from the supplemental component(s) with the findings of the core component enhance the result of the core component in some way, and in turn provide findings that are greater than the sum of the parts?

“Mixing” as a Concept, Not Just a Procedure

When designing mixed methods research, it is important to focus your thinking on mixing and what it involves throughout the entire process of designing your research. Mixing is not a self-contained step or discrete procedure in a linear, step-by-step mixed methods research design. Rather, the intention to “mix” (Creamer, 2018), or “mixing” of some kind, is the thread that runs throughout your research design, giving it coherence as a single study. Put another way, mixing is a conceptual process, not simply a procedural one. The point of interface in your research design is where the components making up your research design are brought together in some way, usually (but not always) occurring “once the analysis of each component is completed” (Morse, 2017, p. 9). However, simply identifying the point(s) of interface of components in a mixed methods research design is not what mixing is or what thinking about mixing is limited to. There is more thinking to do about mixing than determining a single or predetermined point in your research design where “mixing” happens.
For example, a lot of mixing-related thinking has had to occur to get to that point of interface. This includes the thinking underpinning your choice of the components that will make up your design, as well as priority and timing considerations related to them. Further, there is still a lot more mixing-related thinking to do after identifying that point of interface. For example, how will the results of those interfacing components be synthesized, presented, and reported in an integrated way? This makes the question “Around what does the mixing happen?” (Greene, 2008, p. 17) an important one to consider from the outset, and throughout the process, of designing mixed methods research. There are different levels of focus at, and around, which mixing can occur. In the next section, we take a closer look at these levels.

Incorporating Different Levels of Focus Into Your Thinking About Mixing

When designing your mixed methods study, mixing, as we have emphasized in the previous discussion, should not be thought of as occurring at a static or fixed point. There are various levels at which, and ways in which, this mixing might occur. These multiple levels include “different types of methods and data . . . different logic of inquiry, different ways of knowing . . . different perspectives on understanding important social phenomena” (Greene, 2015, p. 608). However, not all mixed methods studies will mix components of the research within or across all these levels.
Which levels mixing occurs at in a specific study will depend on what the study is being designed to do—the research problem and questions that the mixed methods study is designed to address. As Greene notes,

This possibility of mixing at multiple levels is integral to the character of mixed methods approaches to social inquiry, even though not all mixed methods studies do or even should mix at multiple levels. That is, in some contexts a mix of methods alone is most justifiable, while in others a limited methods mix may represent a missed opportunity for deeper insights (Greene, 2015, pp. 607–608).

What all this means is that there is not a recipe or set procedure to follow about what should be mixed and at what level. Instead, in the context of your specific mixed methods study, you will need to think through what these levels might be, and then make decisions about at which of these level(s) mixing will occur, when designing that study. Explicitly declaring what your decisions have been, and why you made them, is an important part of ensuring the quality of a mixed methods study’s design, findings, and the inferences derived from them (Greene, 2015). Consequently, when designing and conducting mixed methods research, you will need to think about how mixing will occur at a number of levels such as

• the overall project design (e.g., methods or components used),

• the data collected (e.g., textual and numeric data),

• the analyses of that data (e.g., the findings of each component). (Yin, 2006)

It also means that you will need to think about how, within each one of these specific areas of the project (design, data, and analyses), mixing will occur. Remember, you may not necessarily need to employ mixing in all of these areas, depending on the purpose of your study.
Which specific mixing strategies you employ in your design, and in which areas of that design, will depend on the purpose of your research, and in turn, the purpose of collecting different types of data or employing different integration strategies related to that data. Therefore, matters related to mixing are first and foremost matters of thinking about what will be mixed and why, after which considerations and decisions about how that mixing might occur can be made. Keeping your focus on mixing when designing and conducting mixed methods research will enable the single mixed methods study to maintain its integrity and not, as Yin (2006) put it, “decompose into two or more parallel studies” (p. 41).

What About Paradigm-Related Considerations When Mixing Methods?

Some years ago, one of us was asked the following question by leading mixed methods researcher Sharlene Hesse-Biber: “What do you think about researchers who say you can’t mix quantitative and qualitative research methods because they arise from two different paradigms and that’s not acceptable on philosophical grounds?” (Hesse-Biber, 2010b, p. 31, italics removed from original). This is a question that you will need to think about when considering using mixed methods research. It is an important question about inquiry paradigms13 and warrants some thinking time.
Qualitative and quantitative approaches to research differ with respect to the methodological and onto-epistemological understandings embedded within them. Methods derived from qualitative and quantitative approaches and methodological thinking cannot be viewed apart from the thinking that gave rise to those approaches in the first place. Therefore, if we are going to mix qualitative and quantitative approaches in a single study, we are also going to have to think through how to navigate the mixing of the different assumptions and understandings implicit in qualitative and quantitative methodological thinking. When reading in the area, you will find different views among scholars about the answer to this question posed by Hesse-Biber. They include the following.

Reply 1: Pragmatism as a possible alternate paradigm for mixed methods research.

Some mixed methods scholars suggest pragmatism is an alternate paradigm that can enable us to move past the paradigm debates of the past.

Pragmatism is a perspective that sets aside debates about philosophy in favor of what works in a particular setting or for a particular set of research questions. . . . Pragmatists approach research with the purpose of producing something that will be both practical and useful (Creamer, 2018, pp. 45–46, italics removed from original).

However, a word of caution is needed at this point. Phrases such as “what works” should not be read as meaning “anything goes” when designing mixed methods research employing pragmatism as a paradigmatic or organizing construct for that study. As Greene notes, “[F]or practitioners to be meaningfully guided by an alternative paradigm [such as pragmatism], they must not only understand it but also understand just how it is intended to influence their methodological decisions” (Greene, 2008, p. 13).
Therefore, she urges us to ask questions of our research design as we think it through, such as "How do the assumptions and stances of pragmatism influence inquiry decisions? . . . What does knowledge that integrates knowing and acting look like and how is it validated?" (p. 13).

Reply 2: Take an a-paradigmatic stance. Some scholars adopt what is sometimes referred to as an a-paradigmatic stance in relation to mixed methods research and what can be mixed. These "scholars see the epistemology–methods link as distracting or unnecessary and simply ignore it, continuing to work as they always have worked, using whatever methods seem appropriate for the question at hand" (Teddlie & Tashakkori, 2003, p. 18). In such a stance, while paradigms are useful for providing insights into, and contexts for, the thinking behind the methods and methodologies that make up mixed methods studies, they do not determine the choice of what mix of methods can be used in a particular study. What does is the question(s) being asked.

Critics of the idea of an a-paradigmatic stance suggest that such a stance "sidesteps the paradigm issue by ignoring it" (Hall, 2013, p. 73). They argue that all researchers make decisions, such as what is data and how it can be interpreted, based on their paradigmatic stance—even if they don't acknowledge it. Therefore,

while the paradigmatic stance adopted by a researcher may not be made explicit . . . [t]his does not mean that they don't have one. . . . This means that the a-paradigmatic
Chapter 10 • Designing Research Using Mixed Methods   261

stance . . . can't be sustained as a viable approach to justify mixed methods research since no research is paradigm free. (Hall, 2013, p. 73)

Reply 3: An incommensurable stance. Other scholars argue that it is not possible to mix paradigms. This view draws on the foundational work by Lincoln and Guba (1985) and their idea of the "incommensurability" (Guba & Lincoln, 1994) or incompatibility of some paradigms, such as postpositivism and constructivism, where "the basic incompatibility of the two paradigms means that no single research study can credibly embrace both" (Yin, 2015, p. 653). For example, there are contradictions between paradigms related to assumptions about causality and generalizability (Yin, 2015) or about the nature of truth (Small, 2011, p. 77). Therefore, it is not possible to mix paradigms in the same study. These scholars question whether a single mixed methods study design using methods derived from incompatible paradigms is actually possible. Critics of this thinking term it "purist" thinking and argue that only very few methods are linked to a single paradigm. They point out that qualitative approaches, the methods they use, and the data they produce can draw on different paradigms. So can quantitative approaches, the methods they use, and the data they produce (Creamer, 2018).

TIP
IF YOU WANT TO READ MORE IN THIS AREA

Greene developed a very useful table to reflect what various scholars "think about the character, value, and role of traditional paradigmatic assumptions in mixed methods inquiry" (Greene, 2008, p. 11). She identified six positions that she called "Mixed Methods 'Paradigm Stance'" (Greene, 2008, p. 11). This provides a useful overview of, and entry point for, the different things you will need to think about to get a sense of the lie of the land in this area.
Summing Up Our Discussion of Mixing

What our discussion of mixing highlights is that, like most other areas of mixed methods research, the idea of "mixing" is an area about which there are different points of view and emphases. However, there is at least one point that all mixed methods researchers agree on. This is that simply having more than one method, component, or strand present in our research, or collecting more than one type of data when "doing" our research, does not necessarily mean that we have some sort of mixed methods study. This is the case no matter how well developed the procedures related to those methods, and the data collected, are. In fact, Teddlie and Tashakkori (2011) consider "designs in which two types of data are collected, but there is little or no integration of findings and inferences from the study" (p. 294) as "quasi-mixed designs" (p. 293) rather than mixed methods research designs. Rather, it is what those methods or that data are being used for, and why integrating them in the way that you do in your design enables you to better address the issue or problem that the research is centered on, that makes your study a mixed methods one. Therefore, mixing is "the key mechanism for the value-added of mixed methods research and its potential to produce creative and coherent explanations and conclusions . . . which increases the comprehensiveness, cohesiveness, robustness, and theoretical power of inferences and conclusions" (Creamer, 2018, p. 82).
STRATEGIES FOR NAVIGATING THE COMPLEX AND CONTESTED FIELD OF MIXED METHODS RESEARCH

Given the range of views about most aspects of mixed methods research, designing mixed methods research can seem a daunting task. In this section, we offer some strategies for how you can navigate the large, complex, contested, and, frankly, at times muddy terrain of mixed methods research.

Strategy One: Use Diagramming (Not Just Diagrams) as a Way of Keeping Track of the Decisions You Make About Your Emerging Mixed Methods Design.

A useful way of keeping track of priority and timing in mixed methods designs is the use of diagramming. Diagramming (Morse, 2017), as its name suggests, is the process of developing a diagram about, or representing in diagrammatic form, what you have thought through, and then made decisions about, when designing your mixed methods research study. The diagram produced is a static representation of this process of thinking and therefore cannot be fully understood removed or decontextualized from the process of diagramming giving rise to it. Diagramming the process of designing your mixed methods research forces you to think through the reasons for your decisions during that process and justify them—both to yourself and to others. It also helps you to track how your research actually is unfolding. Few, if any, research projects follow a set plan or procedure without at least some modifications and changes—all of which need to be tracked, explicitly acknowledged, and justified. Diagramming may outline in overview the entire mixed methods research design. For example, at the outset of the research, diagramming can be used to illustrate and communicate

. . . how each component will be conducted—how they will be paced and sampled, what type of data will be collected and analyzed, and what findings are expected. It will show the location of the point of interface.
It may even show what the final results will look like, and may extend to dissemination. (Morse, 2017, p. 10)

However, remember this can change as the design unfolds. Therefore, diagramming may also be used to focus on what happened when conducting the mixed methods project and indicate where modifications were made to the original design. In this way, diagramming can help both the researcher and the reader of reports of the research track or follow what was happening throughout the mixed methods research process in terms of design-related decisions or modifications made—including when and why they were made. Diagramming is thus a type of audit trail of the thinking and decision-making throughout the mixed methods research process. The key point here is that diagramming is an active, dynamic, and reflexive process that arises from thinking about the active, dynamic, and reflexive process of designing mixed methods research. Diagramming is part of the craft of designing mixed methods research, enabling you to track and make sense of the decisions you make about the shape that your mixed methods design takes. It also enables others to follow the decisions that you have made.
Strategy Two: Keep Folding Back Reflexively on Your Own Thinking, and the Decisions You Are Making or Have Made, When Designing and Reporting Your Mixed Methods Research.

At the heart of any process of navigating the mixed methods research-related terrain is reflexive thinking. Any mixing of methods, purpose, data, or theories will only be as good as the reflexive thinking work that you have done about that mixing. Therefore, it is somewhat surprising that there has not been a lot of writing about the reflexive thinking that researchers actually did about how, and why, "mixing" of some kind added to or assisted them in answering the research questions that their research was designed to address. What led them to make the decisions that they did about their mixed methods design? What did they base those decisions on? What struggles did they have along the way? Published research reports and presentations about mixed methods research studies tend to gloss over these struggles and how they were resolved. Instead, the mixed methods study is presented as unfolding step by step exactly according to some preset plan or typology. Yet as all researchers know, no matter what approach to research is being employed, a research design on paper and actually putting that research plan into practice are two very different things. This is because when making decisions about your research, both when designing and then doing that research, you are putting "if then" thinking (Morse & Niehaus, 2009) into action. For example,

If I ask this question, I will need to sample here from these folk, data will look like this-and-that, I will analyze it this way, and at the end of the day I will know thus-and-so . . . but I still would not have information about this or that. (Morse & Niehaus, 2009, p. 78)

And if I don't have information about this or that, will that matter? What will I gain or lose by having or not having it?
If I do add more components to the research than originally planned, how will the methods be integrated, and where in my study design will that integration take place? Asking these types of questions requires you to constantly fold back on your thinking and decisions in the iterative development of your mixed methods research design.

PUTTING IT INTO PRACTICE
AN EXAMPLE OF PUTTING THIS STRATEGY INTO ACTION

One of us was involved in a study "designed to investigate the experiences of, and effects on, a cohort of young adults who participated in a mindfulness training (MT) curriculum at an elementary (primary) school in the mountain west region of the United States in the 1990s" (Cheek et al., 2015, p. 752). The study design is summarized in Figure 10.3. It comprises two projects, one of which is a mixed methods design. While the figure looks neat and orderly, in fact the development of the design required a great deal of thinking and reflexivity. At the end of the study, the team wrote a paper specifically about putting such "if/then" thinking into action throughout the
process of developing this mixed methods research and how this affected the study design.

FIGURE 10.3 ■ Study Design. Overall programmatic research question: To explore the effects of mindfulness training experienced by elementary school students 15–20 years ago. Note: From Cheek et al. (2015, p. 753). Reproduced with permission.

The article focused "on what we did, why we did it, what happened when we did it, and what we have learned from all this in a specific research study" (Cheek et al., 2015, p. 752). For example, there is a detailed description and then analysis of how the team's thinking about the study design changed as they thought through what they really wanted to know about and why. The team proposed that this sort of reflexive thinking constitutes a "'second' set of findings" (Cheek et al., 2015, p. 754) in the study. The first set of findings relate to the "more 'traditional' substantive results/findings/discussions . . . [offering] insights into the experiences of the children in the classroom of participating in the MT curriculum, as well as the effects of such participation at the time" (p. 754). The second set of findings relate to "methodological and research design aspects of the study. These findings provide new insights into mixed-method and multiple methods research approaches derived from actually using these approaches in a project" (p. 754). Therefore, when writing the article, and reporting this second set of findings, the aim was to surface and highlight the dynamic and reflexive thinking which underpinned the design of the study—thinking that "is not made explicit, and therefore remains hidden, in most research reports: an absence that this article seeks to redress by focusing on this 'second' set of findings" (Cheek et al., 2015, p. 754).
Exposing and analyzing this second set of findings can counter simplistic and unthought-through assumptions about mixed methods research such as the argument that simply having more methods results in a better study or better results. Distilling all this down to one key message, it is important to show how and why designing and conducting your mixed methods study in the way that you did resulted in richer data and stronger inferences.
Strategy Three: Don't Go It Alone. Get Help and Find Support Along the Way.

Teaming up with others could prove to be a good strategy to make sure you move forward in your thinking when navigating the complex field of mixed methods research. Greenwood and Terry (2012) provide a unique reflexive account of how a group of doctoral researchers from the field of nursing navigated and addressed the challenge of "[d]esigning and implementing a mixed methods project" (Greenwood & Terry, 2012, p. 98). A reading group comprising four students and two supervisors formed as a means of "grappling with the methodological and design issues of mixed methods research" (p. 98). Their account identifies "[f]ive principal challenges for novice researchers" (p. 100), namely,

• Adopting and defending a mixed methods design
• Understanding the meaning of paradigm and its implication for the research project
• Identifying stages of mixing or integration in the individual projects
• Examining issues that threatened rigor
• Exploring and developing skills in "writing up" a mixed methods project

(points are taken from Table 2 of Greenwood & Terry, 2012, p. 100)

For each of these challenges, they provide a description of the challenge and a possible way forward produced by the collective reflexive thinking work done by that reading group. For example, a way forward identified by the group in relation to the first challenge—"adopting and defending a mixed methods design"—was as follows:

Perhaps this is not such an issue because mixed methods emerged to allow research questions to be addressed with greater flexibility, and a predetermined design strategy could inhibit future creative efforts that might fall outside of these perspectives. . . . The group struggled with the notion that a research question should "fit" into an established typology.
Instead the group supports innovative and flexible designs driven by the research question thereby removing self-imposed constraints and thus allowing the research design to become an "enabler" rather than a "constrainer" for addressing real world problems. (Greenwood & Terry, 2012, p. 101)

In addition, based on their experience of being part of this reading group, the authors provide an excellent reflexive account of considerations for forming a reading group, how they managed that group, including challenges that arose, and what made it work. Both of these interconnected reflexive accounts—one focusing on the issues faced by novice researchers entering the mixed methods landscape, and one focusing on how a reading group was used to navigate this terrain—provide "sign posts" (Greenwood & Terry, 2012, p. 98) to assist others on their mixed methods journeys.

CONCLUSIONS

The discussion in this chapter has demonstrated that using mixed methods well in a research study requires a lot of thinking. Thinking not just about how to mix methods in some way, but also reflexive thinking about why to mix those methods in the first place,
and why in this way and not that way. To think in this reflexive way, you will need to be well versed in the approaches and methods that are being mixed, as well as in the body of knowledge about the idea of mixing methods itself. Consequently, when thinking about how to structure, as well as where in the book to place, this discussion of designing mixed methods research, we faced the same problem that Morse (2017) identified when she was teaching students about mixed methods:

My problem, when teaching mixed-method design, is that readers and students are expected to know a lot about research at the start—to know and understand many concepts, procedures, rules, and research principles-in-action, all at once. (Morse, 2017, p. xi)

This is why this chapter appears so late in this book—the next to last chapter. We needed to have had other discussions first. The discussions in Chapters 4, 5, 6, 7, 8, and 9 needed to precede this chapter. What we hope you will have as a result of reading this chapter, in conjunction with the other chapters in this book, is a better understanding of the various layers of thinking needed in order to make informed decisions about the type of "mixing" you will employ in your research design and why. This is a type of thinking that will enable you to justify those decisions and thereby add to the rigor and trustworthiness of your research. It is also a type of thinking that can enable the potential offered by mixing methods to be met. Throughout the chapter, we have aimed to open up rather than shut down thinking about designing research using mixed methods. We did not want the chapter to default to what some (actually many) discussions of mixed methods have become: versions of "the comparatively drab days in the middle of the previous century, when becoming a social inquirer was a matter of learning 'the proper methods properly applied' (Smith, 1983)" (Greene, 2015, p. 606).
Mixed methods is a lively and alive field, not a drab and prosaic one! The type of reflexive thinking the chapter advocates is part of responsible research design and being a responsible and reflexive methodologist (Kuntz, 2015). It requires us to think about why, and then (and only then) how, we will use mixed methods so that our design is more than a set of techniques. For as Greene reminds us, "Responsible social inquiry today requires thoughtful consideration of methodology's multiple layers, well beyond the technical layer" (Greene, 2015, p. 606). Thinking about how to do this, and then designing research around those considerations, is what designing research that mixes more than one method in some way is all about.

SUMMARY OF KEY POINTS

• There are various understandings, and different emphases, about what a mixed methods research approach is.
• One understanding is that a mixed methods research approach is a method that combines qualitative and quantitative approaches in one study.
• Another understanding is that employing a mixed methods research approach means employing a way of thinking that combines aspects of qualitative and quantitative thought.
• A third understanding is that a mixed methods research approach is an approach where (1) more than one method is used in a single study, (2) one method is complete but the other method(s) is incomplete in some way, and (3) those methods do not have to comprise both a qualitative and a quantitative component.
• If you choose to use a mixed methods approach in your study, then you will need to think through, and say something explicitly about, why mixing methods in your study design will give you "more" (e.g., a better, fuller, or deeper answer to your research questions) than using a single method.
• Just having two methods present in a study, or collecting two types of data, does not necessarily make your study mixed or your study design a mixed methods one.
• When designing mixed methods research, you need to think about (1) the priority given to the components in the study in relation to the overall logic of inquiry that is shaping the mixed methods study; (2) when the various components that make up the design will be undertaken in the study—timing; and (3) what will be mixed, when, and how in your study—mixing.
• Different basic research designs for mixed methods research reflect differences in the timing and priority of the components making up that research.
• Mixing is an analytical concept that permeates the entire process of designing mixed methods research: it is not simply a procedure or point in that design.
• There are a number of possible levels at which mixing can be thought about or occur, as well as strategies associated with that mixing at those levels. This can range from incorporating findings from one component that builds onto a second component to integrating or weaving two components such that one informs the other.
The bringing together of qualitative and quantitative components can also take place across the research process through mixing of methodologies, mixing of data collection methods, and mixing of analytical and interpretive techniques.
• What levels mixing occurs at, and what associated strategies are used for such mixing at those levels, in any specific mixed methods study depends on the purpose for the mixed methods study in the first place.
• Aids that can assist you in navigating this complex research approach include diagramming, folding back reflexively on your own thinking, and finding support along the way when designing your research.

KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

capitalization
core component
diagramming
mixed methods research approach
mixing
notation systems
point of interface
priority or weighting of components
qualitative component
quantitative component
quasi-mixed designs
supplemental component
timing or pacing
SUPPLEMENTAL ACTIVITIES

1. Putting diagramming into practice. Find a research article reporting what is described as a mixed methods study. After reading it, try diagramming the study to provide an overview of the design of the mixed methods project.
OR
If you are in the process of either designing, or conducting, a mixed methods study, try diagramming the study design you have come up with or are developing. When doing so,
• at the top of the diagram, indicate whether it is a qualitatively or quantitatively driven design.
• Next, identify the components that make up the research design.
• Indicate the priority and timing of the components.
• Use the notations QUAL/QUAN; qual/quan; +/→ when doing so (remember there may be more than one supplementary component).
• Identify the type of data produced by each component and what methods or strategies were used to do so.
• Indicate where the point(s) of interface of the results of each component in the study are in the research design.

If the article you choose already has a diagram, then work backwards to match what is represented in the diagram with what is reported in the text of the article. If you are having trouble finding an article, try using Leung, D. Y., Kumlien, C., Bish, M., Carlson, E., Chan, P. S., & Chan, E. A. (2021). Using internationalization-at-home activities to enhance the cultural awareness of health and social science research students: A mixed-method study. Nurse Education Today, 100, Article 104851. https://doi.org/10.1016/j.nedt.2021.104851

2. If you are in the process of designing research, and are considering using a mixed methods approach in that design, write down as precisely as you can
a. Why using more than one method will enable you to obtain better, fuller, or deeper answers to your research questions than using a single method.
b. What understanding you have of what mixed methods is.
c.
How that understanding affects different aspects of your research design.

If you are not in the process of designing research, think of a topic that you might consider using a mixed methods research approach for investigating. Thinking about that topic, answer (a)–(c) above. Addressing point (a) will provide the rationale for using a mixed methods approach in your research design in the first place. Addressing point (b) will enable both you and the readers of your work to locate your study in a specific way of thinking about, and understanding of, mixed methods research. Making your understanding of mixed methods research explicit is imperative in order to
be able to design research that is congruent with that understanding of mixed methods research and for making sure that the entire design is in keeping with that understanding. Therefore, being able to address (b) is a necessary prior condition to addressing point (c). When you have clear statements about (a)–(c) above, continue your writing by answering the following questions:
d. What are the methods you will use in your design? Why those and not others?
e. What priority and weight will each method be given, and why?
f. What is the timing of using one method in relation to using the other method?
g. How would choosing a design with different priority and timing affect the answers to your research questions enabled by the research design? Why is this alternative design not the one you have decided to employ?
h. What will be mixed, when, and how in your study?

Answering (d)–(h) will help you develop a research design for a mixed methods study that will not "decompose into two or more parallel studies" (Yin, 2006, p. 41).

FURTHER READINGS

Creamer, E. G. (2018). An introduction to fully integrated mixed methods research. SAGE.
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.
Greene, J. C. (2007). Mixed methods in social inquiry. John Wiley.
Morse, J. M., & Niehaus, L. (2009). Mixed method design: Principles and procedures. Left Coast Press.

NOTES

1. We will use the terms mixed methods and mixed methods research throughout this chapter as overarching generic terms. When discussing specific understandings of mixed methods research that use a specific form of referring to mixed methods in line with those understandings, we will use that form of the term, for example, when the term mixed-method is used to reflect a specific understanding of mixed methods—see Snapshot 3 in this chapter.
2.
Greene defines mental model this way: “[A] mental model is a set of assumptions, understandings, predispositions, and values and beliefs with which a social inquirer approaches his or her work. A mental model includes inquirer stances, values, beliefs, disciplinary understandings, past experiences, and practical wisdom” (Greene, 2007, p. 53). In other words, a mental model is an explanation of someone’s thinking about how something works in the real world, and can help set an approach to solving problems: Mental models are personal, internal representations of external reality that people use to interact with the world around them. They are constructed by individuals based on their unique life experiences, perceptions, and understandings of the world. Mental models are used to reason and make decisions and can be the basis of individual behaviors. They provide the mechanism through which new information is filtered and stored (Jones et al., 2011, p. 1).
3. Creswell (2011) describes this as a "composite definition for mixed methods based on 19 definitions provided by 21 highly published mixed methods researchers" (p. 271).
4. Creswell explains: "Mixed methods further is not simply the collection of multiple forms of qualitative data (e.g., interviews and observations), nor the collection of multiple types of quantitative data (e.g., survey data, experimental data). It involves the collection, analysis and integration of both quantitative and qualitative data. . . . When multiple forms of qualitative data (or multiple forms of quantitative data) are collected, the term is multimethod research, not mixed methods research" (Creswell, 2015, p. 3).
5. Morse explains: "Some researchers who used mixed methods (note: pluralized with no hyphen) are actually referring to two research methods conducted simultaneously or sequentially—and although they use the same name ('mixed methods'), they are actually conducting multiple methods. . . . Their way of doing mixed methods is considerably easier than doing a mixed-method designed study, because each method (or component) is complete (capable of being published alone)" (Morse, 2017, p. 4).
6. Hesse-Biber (2010b, p. 210) describes this as a "gold-rush mentality."
7. For example, in 1989, Greene and her colleagues refer to mixed-method evaluation designs (Greene et al., 1989), but later work by Greene uses the form mixed methods.
8. For example, Creswell's description in 2015 of mixed methods research "(a)s a field of methodology about 25 years old" (Creswell, 2015, p. 1).
9. While Morse, Cheek, and Clark (2018) identified these as reasons for introducing a qualitative component, either as a core or a supplemental component, into a mixed-method study, the same type of reasons apply to the introduction of a quantitative component as a core or a supplemental component into a mixed-method study.
Note also the terminology that they use (mixed-method), which gives you insight into what understanding of mixed methods they are employing.
10. However, as with the idea of mixed methods, you will soon find that different scholars use different terms when describing these dimensions or have different ideas or emphases related to them. While some of the alternate forms of words used on the surface may appear to be just different names for the same idea, this may not always be the case. Often these terms reflect different understandings of, or emphases about, aspects of mixed methods research.
11. See Snapshot 3 in the section "What is a mixed methods research approach" earlier in this chapter.
12. See previous discussion of this in Snapshot 3 of this chapter. Different understandings of mixed methods also result in different understandings of multiple methods.
13. You will remember from Chapter 4 that all researchers approach "the world with a set of ideas, a framework (theory, ontology) that specifies a set of questions (epistemology) that he or she then examines in specific ways (methodology, analysis)" (Denzin & Lincoln, 2005a, p. 21). This set of ideas, or inquiry paradigm, defines "for inquirers what it is they are about, and what falls within and outside the limits of legitimate inquiry" (Guba & Lincoln, 1994, p. 108).
11
WHY KNOWING AND DECLARING YOUR RESEARCH DESIGN HAND MATTERS

PURPOSE AND GOALS OF THE CHAPTER

The purpose of this chapter is to round off, and consolidate, the key points we have made about the process of designing research throughout the various chapters of this book. To do this, we use the idea of declaring the cards in your research hand as the vehicle for the discussion in the chapter. When designing our research, we hold many possible types of research design–related "cards" in our hand. These are cards such as preferences we might have for, or assumptions we might make about, particular types of methods, methodologies, or research questions. These are assumptions such as what a "good" or "better" way of designing and conducting research is. Often, they are made on the basis of the disciplinary and methodological traditions we have come into contact with, or been trained in. Recognizing these preferences and assumptions, where they come from, and how they affect the choices and decisions we make when designing our research is important. Such preferences and assumptions affect the way we think about all aspects of our research design and influence the design-related decisions we make when developing it. Knowing what these preferences and assumptions are, and what effect they have had on our thinking, is central to being able to justify what we did, and why, when designing our research. Therefore, the choices and decisions that you make, and which collectively make up your research design, need to be declared. They also need to be justified in terms of establishing the credibility of that design. To do this requires a great deal of careful reflexive thinking work on your part. However, what these decisions were, and what the basis for them was, is often missing from reports of research. It is hard to find examples of the form such reporting might take. Therefore, the chapter has a practical edge to it.
It explores how you might actually write about, declare, and justify the decisions you made that gave rise to your research design. Or, put another way, how you might capture the thinking that gave rise to it. At the end of the chapter, because it is the final chapter in the book, we provide two sets of related conclusions which differ in their level of focus. The first set rounds off our discussion in this chapter about the idea of declaring your research hand and how this is linked to establishing the credibility of your research design. The second set takes the form of overall key points that have emerged about designing research throughout the entire book. Read in conjunction with the key point summaries at the end of each of the individual chapters in the book, this list of overarching key points provides you with a good overview of what we have covered in the book.
272  Research Design The red thread holding the chapter (and the book) together is the type of thinking you will need to do throughout the entire process of designing your research, as well as when putting that design into practice and when reporting the findings that the design enabled. The goals of this chapter are to • Highlight that the choices and decisions that collectively make up your research design need to be declared. • Establish that like actual playing cards, in themselves these research design–related cards are neither necessarily good nor bad, nor more or less suitable for a research design. Rather, it is what they are used for, and how they fit with each of the other cards in your research design hand, that makes them more or less suitable for a particular research design. • Demonstrate that it is the combination of, and connections between, these metaphorical research cards that produces your research design, for example, connections between the purpose of the research, the combination of research-related cards you hold in your hand, and how that combination will enable that purpose to be achieved. • Point out that we have collected the cards in our hand from our experiences when interacting with the disciplinary and methodological traditions we have come into contact with or been trained in. • Emphasize that when designing research, we will need to expose and revisit the beliefs and assumptions we hold both about research, and designing research, that we bring with us to the process of designing our research. • Establish that designing credible research, no matter what research approach is taken, involves knowing and declaring what the effects of such choices are on both the overall design of the research and the results of that research. • Illustrate that to declare and justify your research hand will require reflexive consideration and acknowledgment of the choices you have made and the effect that those choices have had on your research design. 
In other words, you will need to declare what these decisions are and justify them. • Provide practical tips for how you might think through, identify, and declare your own research hand when designing, and writing about, your own research study. • Revisit and highlight that a central theme in the book has been to emphasize that designing research involves thinking through a dynamic and complex web of interconnected decisions. In other words, designing research is a process driven by a way of reflexive thinking that requires us to constantly revisit and assess our decisions when developing the various aspects of our research design. • Illustrate that this is a type of thinking that requires us to move forward and backward through our developing research design to continuously revisit, review, revise if necessary, and then build on the decisions we have made about that design.
Chapter 11 • Why Knowing and Declaring Your Research Design Hand Matters 273 • Conclude, while at the same time keeping open, our extended discussion about designing research by providing a summary of overarching key points that have emerged throughout the book. KNOWING AND DECLARING WHAT YOUR RESEARCH DESIGN–RELATED HAND IS Think of yourself pulling up a chair at a card table to join a group of people in a game of cards. During that game, you will be dealt a number of cards. At various times, you will think about these cards as either “good” or “bad” cards. Now think about what makes you able to say that. It is not the cards themselves—after all, a card that has the number 2 on it is a card with a 2 on it, no more or no less. Rather, it is what that card with the number 2 on it can be used to do at a particular point in a game that makes it either a good or bad card. What it can be used for depends on the rules of the card game that is being played. In some games the card that is a 2 might be a key card, in others not at all.1 However, what also makes the card with the number 2 on it a good card, or a not-so-good one, is what the other cards are that you have in your hand. How does the 2 fit with the rest of these cards that you have in your hand? For example, if the rules of the card game require you to have a numerical run of cards, is the 2 the missing card you need in terms of being able to put your hand down on the table, go out, and win the hand? Or is it a card that stops you from being able to go out because it doesn’t give you the combination of cards you need to do so?
In other games, having a winning hand might involve doing something else, for example having four cards all of which are spades. Then the 2 of spades is a very useful card because it is a spade, not because it is a 2. Therefore, it is the purpose of the card game that you are playing that determines which cards, or combinations of cards, you will need to collect in order to have a winning hand. It is how, in light of that purpose, a card can be used individually or in combination with the other cards in your hand, that makes that card “good” or “bad.” In one game, a particular card can be a “good” card, in another game the same card can be a “bad” one. The point we are making here is that no card is innately better than another. Rather, it is how they are played, both individually and in combination, related to the purpose of the game in the specific hand being played, that makes one card better than another. Therefore, it is that hand- or game-specific purpose that will lead you to either keep or discard cards. Pulling Up a Chair at the Research Design Table Now, holding in your head this image of players sitting around a card table with hands full of cards considering how they can, or might, use those cards, think of a research design table. This is a metaphorical table where combinations of scholars, researchers, or those with an interest in research can pull up a chair and enter into dialogue2 about the process of designing research. Those sitting at this table are encouraged to reveal and share their ideas about aspects of research design. There is room for everyone at this table. When you pull up a chair at this metaphorical table of research design, there are many possible types of research design–related “cards” that you may already hold in your hand. 
These are cards such as preferences you might have for particular types of methods, methodologies, research questions, as well as the assumptions about knowledge, research, and science that these preferences are connected to, and derived from. You have collected these cards from your experiences when interacting with the disciplinary and methodological traditions you have come into contact with or been trained in. The cards you hold are related to your view about • paradigms and onto-epistemological assumptions and their effect on the way research, and consequently research design, are thought about and enacted (Chapter 4); • ethical considerations that permeate the entire research design process (Chapter 2); • what constitutes a research problem and what form that problem and associated questions might (even must) take (Chapter 3); and • what methodologies and methods can be used to address that problem (Chapters 5−10). Like actual playing cards, in themselves these metaphorical research design–related cards are neither good nor bad. For example, as we saw in Chapters 5 through 10, particular types of methods such as qualitative interviews, quantitative surveys, and forms of mixed methods research are neither more nor less suitable for a research design in themselves.
Rather, it is what they are used for, and how they fit with each of the other cards in your research design hand (e.g., your research questions, your ethical considerations when putting a method into practice) that makes them more or less suitable for a particular research design. Therefore, how each card relates to each of the other cards in your hand in light of the purposes of the research will determine which of these metaphorical research design–related cards you will keep, and which you will discard. It is the combination of, and connections between, these metaphorical research cards that makes up your research design. Just as in a card game no one card in your hand can be viewed in isolation from either the purpose of the game, or the other cards that you hold, when designing research no one aspect of that process or choices you make about that aspect can be viewed in isolation from the purpose of the research and the overall research design. The following box develops this point further. PUTTING IT INTO PRACTICE IT IS THE COMBINATION THAT MATTERS In this box, we use an insightful and wise observation from Ian Greener (2011) to demonstrate the key point we have been making—namely that it is the combination of metaphorical research design–related cards viewed in light of the purpose of the research that matters when designing research. The discussion here takes the form of us having a dialogue with Greener’s observation. The italics are our comments on the points he makes. [D]ifferent methods have very different underlying philosophies, and it is important to have an understanding of these differences, not because it will allow you to show how clever you are, but because it can help you to design social research so that it is more coherent and so has a better chance of success. (Greener, 2011, p. 
21) Or for the purposes of the discussion here, it is important to understand what cards go with each other (i.e., methods with methodologies, and methodologies with ways of viewing knowledge) to form a hand that is a “good” one in terms of being made up of cards that together will enable you to achieve the purpose that the research is being designed for. Knowing all the philosophy in the world can’t guarantee that you do a good project. (Greener, 2011, p. 21) Nor does knowing all about specific methods or being a brilliant exponent of techniques and procedures related to those methods guarantee that you do a good project. It might, but it depends. Just as having an ace, for example, in your hand in a card game doesn’t guarantee that you have a good hand of cards as it depends on what that ace can be used for which in turn depends on the purpose of the game and the other cards that you hold, so having a particular method in your research design hand doesn’t guarantee that your research design will be a good one. It depends on what the purpose of that research design is and the other aspects that make up that design whether or not that method is a good choice. But it can prevent you from making mistakes, such as claiming that you are going to prove or disprove a theory by using a method which simply doesn’t offer this as a possibility. (Greener, 2011, p. 21) This is why it is important to know what the purpose of the research you are designing is, and then make sure that you collect the research-related cards you will need to enable
the possibility for that purpose to be met. Prematurely shutting down your thinking about methods or the purpose of research, and at times not even realizing or declaring that you have done so, may mean that you simply do not have the means available to address the problem that your research is being designed for. The card game equivalent is this: if the game requires jokers and you have removed them from the pack (as you might do in some other card games with different rules), then you simply cannot achieve the purpose of the game, because it requires the jokers to remain in the pack. As it would be odd to try and generate an in-depth understanding of a social setting using an experimental design (odd, but not impossible), it would be odd to try and test a theory using qualitative data because words, videos or other recordings, the primary forms of qualitative data, can’t really be used to prove or disprove anything. (Greener, 2011, p. 21) It is (1) the purpose of the research and (2) the combination of metaphorical research design–related cards that you are holding in your hand, and (3) the congruency and consistency between them that matters. Does that combination enable you to achieve the aim of the research? If it does not, then you may need to discard some of those cards (e.g., methods that will not enable you to address your research problem) and replace them with others that will give you the combination of cards that does enable you to achieve the aim of the research. Putting All of Your Cards on the Table and Declaring Your Hand: An Important Part of Research Design In any card game, at some point you will have to put the cards you have been collecting on the table and declare your hand. This is so that the others at the table are able to evaluate the worth of that hand in terms of the understandings and purpose of the card game being played. 
In the same way, when designing research, after you have thought through and made your decisions about the various aspects of your research design, you will need to show and declare that hand. This involves declaring the decisions that you have made when designing and thinking through that research. It is also about being able to justify those decisions. To declare and justify your research hand will require reflexive consideration and acknowledgment of the choices you have made, and the effect that those choices have had on your research design. At times, this might take your thinking into areas that, even though they may have profoundly affected the shape that a research design has taken, often remain undeclared and therefore invisible. For example, putting all of your cards on the table when declaring your hand may include revealing cards related to the politics of research such as the effect that trying to obtain funding for the research had on the research design.3 Was, for example, your study design influenced by considerations such as the preferred method of a particular funder of research or dissertation committee? DECLARING YOUR HAND: MISSING IN ACTION IN MUCH OF THE REPORTING OF RESEARCH The previous discussion has emphasized the importance of recognizing, and then declaring, your hand in relation to both the decisions that you make when designing your research and the reasons for those decisions. Yet, this is something that is not often done
in reports of research. What one of us noticed several decades ago remains the norm. This is that “[r]arely, if ever, are research proposals themselves published so that the conceptual development of the research project can be explored” (Cheek, 2000b, p. 67). In other words, the thinking that shaped the design of the research being reported, and how that design developed, is rarely, if ever, declared or published. Consequently, in reports of research we are usually only presented with the end point of the research design process—what the research design became or finished up as. How that design became what it did remains invisible or undeclared. This can be for a number of reasons ranging from word count restrictions by journals limiting what can be reported in an article, through to researchers assuming shared understandings of what a research hand is (or can be). However, no matter what the reason, the effect is the same. Part of the researcher’s hand is missing and not declared. This is because we join the conversation about that research and its design partway through. We begin reading about the research when the design is set and the research has been conducted. The part of the conversation that we are missing is what happened before that. What was the thinking that led to the research being designed and conducted in the way that it was? For example, many published papers reporting forms of “quantitative” research present the results of that research by referring to findings in the form of lists of outcomes from statistical analyses. Most often, such reports fail to discuss how, where, and why the data were collected, and on what basis those decisions were made. This is despite the fact, as we discussed in Chapters 8 and 9, that these decisions affect what data were collected and consequently what analyses were able to be done (or not done). 
This includes the choices made about how concepts or constructs to be measured were defined, what was deemed to be important about those concepts to measure and why, as well as what tradeoffs needed to be made when designing the research. Tradeoffs could include needing to use a sample of participants because of feasibility considerations, or using convenience sampling while still aiming to say something about the entire study population when employing quantitative approaches to research (Chapters 8 and 9). Similarly, when reading reports of qualitative research, we might read that six focus groups were conducted and then move on to reading about the findings of the research based on interpreting the information gained from those focus groups. What we don’t often read is why focus groups were chosen to be part of the research design and what was gained and lost as a result of making that choice—for example, an explanation that focus groups were chosen because the interaction within the group was deemed of greater importance, in terms of achieving the purpose of the research, than the depth of information able to be obtained from each individual participant. What Happened Along the Way? Another part of the researcher’s hand often missing from reports of their research is what happened along the way when the design was actually being developed, and then put into action. What iterations did that design go through before the researcher landed on its final form? Did what happened when putting the research into practice affect the final form that the research design took? If so, how and why? Things happen and change when doing your research. For example, unexpected events such as recruitment difficulties may lead to further decisions related to the design of the research having to be made. This happens in all research approaches—even the most quantitative and seemingly objective ones. For as Becker noted,
Sociologists of science . . . have shown us how natural scientists work in ways never mentioned in their formal statements of method, hiding “shop floor practice”—what scientists really do—in the formal way they talk about what they do. (Becker, 1998, p. 5) This missing or hidden part of the research design conversation matters. Designing credible research, no matter what research approach is taken, involves knowing and declaring what the effects of such choices, and any changes related to them, are on both the overall design of the research and the results of that research. It also involves knowing and declaring that research design considerations do not end once you have developed your initial design that will guide your study. In the next section of the chapter, we put the spotlight on how you might think through, identify, and declare your own research hand when designing, and writing about, your own research study. TIP LEARNING FROM OTHERS Although what happened along the way when the design was actually being put into action is often missing when that research is reported, there are some good examples of such reflexive reporting. We can learn much from these researchers’ and scholars’ descriptions and reflexive analyses of how they thought through, and then declared, their research design–related hand when reporting or writing their research. Examples of this type of writing include the following: 1. The article we referred to in Chapter 10 by Cheek et al. (2015) that was written to deliberately focus on this often absent and therefore hidden part of the conversation about designing research—the thinking that “is not made explicit, and therefore remains hidden, in most research reports” (p. 754). The team of researchers considered that exposing and exploring this missing part of most research design–related conversations constituted a “‘second’ set of findings” (p. 754) for their study. 
These were findings about the dynamic and reflexive thinking that led to the research being designed and conducted in the way that it was—findings that provide an otherwise missing part of the conceptual context (Maxwell, 1996) for both the research design itself and the empirical findings that emerged from that study. See Cheek, J., Lipschitz, D. L., Abrams, E. M., Vago, D. R., & Nakamura, Y. (2015). Dynamic Reflexivity in Action: An Armchair Walkthrough of a Qualitatively Driven Mixed-Method and Multiple Methods Study of Mindfulness Training in Schoolchildren. Qualitative Health Research, 25(6), 751–762. 2. Elaine Demps’s (2013) reflexive account of the rationale for why she chose to use what she calls “the blurred genre of interpretive critical inquiry” (Lincoln & Guba, 2013, p. 90) in her doctoral study. This reflexive account is a very clear and helpful example of declaring one’s hand with respect to the decisions made, and the reasons for them, when developing a research design. Her account provides a practical and thoughtful example of what a study might look like as a result of these decisions and, more importantly, why that is so. See Demps, E. L. (2013). Excerpts from Elaine Demps: Understanding the Faculty Experience of Teaching Using Educational Technology. In Y. S. Lincoln & E. G. Guba (Eds.), The Constructivist Credo (pp. 83–198). Left Coast Press.
3. The idea of “showing your warts” (Gernsbacher, 2018, p. 403) when designing your quantitative research and acknowledging what you actually did. For example, If experiments were conducted in an order different from the reported order, state that. If participants participated in more than one study, state that. If measures were recalculated, stimuli were refashioned, procedures were reconfigured, variables were dropped, items were modified—if anything transgressed the prespecified plan and approach—state that. (p. 405) See Gernsbacher, M. A. (2018). Writing Empirical Articles: Transparency, Reproducibility, Clarity, and Memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. HOW TO DECLARE YOUR RESEARCH HAND: CIRCLING BACK TO TELL THE STORY OF THE DESIGNING OF YOUR RESEARCH Declaring your research hand involves “telling the story of the project” (Schostak, 2002, p. 228). Such telling is about what the research was about and why. It is about the choices that arose along the way when designing that research and the decisions that were made about them. It is about changes to the design that were made during the design process, what they were and why they were made. Schostak likens this to “a return to the beginning, a circling back like a bird of prey” (Schostak, 2002, p. 228).4 Such circling back involves a great deal of reflexivity or folding back on what you have done. This is because in order to be able to tell the story of the project, you need to know what that story is. In order to know what the story is, you will need to circle back on the series of interconnected decisions you have made which have produced the shape of your research design, and which therefore make up the story of your research design. 
This means that a starting point for declaring your research design–related hand when writing about and presenting the final form that your research design takes is to think through your research design and the process that produced it. Thinking through your process of producing that design enables you to identify the decisions that you made about, and which underpin, that design. However, just identifying what those decisions were, in itself, is not sufficient in terms of declaring your hand. Once you have identified what the decisions were, you will also need to circle back on those decisions and think through your reasons for making them. Why did you make them, based on what? To do this will require you to expose and revisit the beliefs and assumptions about both research, and designing research, that you have brought with you to the research design table. These are the beliefs and assumptions on which your decisions are based. Or, put another way, these are the beliefs and assumptions that have produced the combination of research design–related cards that you now hold in your hand and are declaring. This is why Koro-Ljungberg (2016) suggests that “[r]esearchers should ask themselves . . . Why are they drawn to a particular set of beliefs? What are labels such as ‘paradigms,’ ‘reflexivity,’ or ‘triangulation’ expected to signify? What do particular labels do? How do they operate? Who might gain from the use of these methods?” (p. 14).
To think through, and tell, your research design story in this way requires a form of thinking out loud to yourself when designing your research underpinned by forms of the “if . . . then . . .” thinking (Morse, 2017, p. 48) and questioning that we have alluded to throughout various parts of the book. Identifying these choices and the effects that they have had on your thinking about, and the subsequent form of, your research design is part of declaring your hand. Such circling back on your thinking is part of what Guyotte and Kuntz (2018) describe as asking students in their research classes “to give language to how they come to know and become, thus opening the possibility for knowing/being otherwise” (p. 257). Telling the story of the process of designing your research in this way informs readers of your research about the assumptions, and decisions related to them, that constitute your research hand. In so doing, it gives a reader the information that they will need, and sometimes might otherwise not have, in order to make informed judgments about both the way that the research was designed and enacted, as well as the veracity of the conclusions reached as a result (Schostak, 2002). It also has the very useful effect of raising awareness and informed thinking among researchers from different traditions about the various ways that research can be designed, and what to look for when judging or making decisions about research from those different traditions. This is important, for as Janesick (2008), commenting on Egon Guba’s contribution to raising awareness of, and understanding about, qualitative inquiry, notes, Guba “pointed out how many at the time did not understand qualitative methods because so many qualitative researchers forgot to carefully document what they did as researchers” (Janesick, 2008, p. 565). 
Such an observation, it can be argued, applies equally well to mixed methods research and to forms of quantitative inquiry. Reflexively thinking forward and backward through the process of designing research enables you to consider questions such as these: Are the choices you have made about which methods you will use, and the way that you will use them, consistent with the choices you have made about methodology and the onto-epistemological assumptions those methodologies make? How are all these choices connected to the research problem that your research is being designed to address? How in turn does this constellation of choices interconnect with the theoretical and empirical constructs that the design is being built on? How does every decision that you make at every point of the process of designing research and every set of connections arising from those decisions reflect an emphasis on ethical and responsible thinking when designing your research, for example, connections such as what type of data you collect, what about, who from, and how? This type of questioning, and the story that we tell as a result of it, is central to the credibility of our research design and enables us to remain “faithful to how we practice research and scholarship” (Preissle & deMarrais, 2011, p. 32). Activity Try This: What Happens When One of Your Cards Changes? Drawing on the metaphor of the research design table and research design–related cards, try to work out what research cards you bring to the table. These cards are the ways that you think about, for example, what a research question is and the form it
takes, what a research method is (and is not), what form research data takes, and what analyzing that data means. Now try this. Discard the card that you are holding about what a research question is and the form it takes. Mentally pick up another card that has a different view of what a research question is and the form it takes. What does this do to all the other cards in your hand? How to Reflexively Circle Back to Tell the Story of Your Project When Reporting on Your Research You may be thinking to yourself at this point that while this all seems well and good, the issue remains how you might actually go about telling this story. How can you do this? One possible way of doing this is offered by Schostak (2002). He provides a useful guide in the form of a series of statements for you to think or write about that can help you both think through the types of decisions indicated above that make up your research design, and the reasons and effects of those decisions. Although his focus was on telling the story of the project when writing up a thesis, these types of tips can be used equally well to tell the story of the process of designing a research project. The guide provides a great example of reflexive thinking in action. The statements are as follows: 1. This is what I intended as aims and objectives, these were my beliefs that constructed my initial rationale. 2. These were how I conceptualized the structures, resources, mechanisms that framed my thinking and that of others. 3. This is how I conceived the structure of problems and opportunities that I faced at the outset. 4. And this was my initial methodology and chosen methods to obtain the data that I wanted. However, after an exploration of the literature, or experience in the field of some combination of [sic] both I discovered: 5. More about my and others’ ethical, political and so on value positions. 6. 
I discovered more about the implications of my theoretical frameworks. 7. I explored alternative methodologies, philosophies and theories. And these are discussed in Chapter ‘X’. Thus I needed to reframe any project in the following ways: 8. This led me to refine my data collection etc. according to new or modified rationales. 9. Which in turn led to innovative approaches to . . . , forms of representation of . . . and theories or models of . . . and facts or information, or findings concerning . . . And what I finally learnt from reflecting back on all these substantive, theoretical and methodological investigations and experiences was . . .
The story creates an air of continuous critical reflection, debate and ethical soundness, and focuses on the agenda of concerns the writer wishes to emphasize. (Schostak, 2002, pp. 228-229)

Activity: Circling Back to Some Activities From Chapter 1

In terms of thinking about how to declare your hand and tell the story of your project when reporting your research, you will find it useful to circle back to the two activities at the end of Chapter 1 to help you get a sense of how to reflexively think through and tell the story of how your research design came to be the way it is. In Activity 1, we asked you to obtain a report outlining the findings of a research study and look at the level of detail about the way the research was designed, and what was discussed, and what was not. We provided a series of points for you to think through related to that report. Now we want you to think about how these points could also guide you when writing about how your research design, and your research, came to be the way it is.

Next revisit Activity 2 at the end of Chapter 1. Here we asked you, if you were in the process of designing your research, to journal the decisions you make during that process, and why you make them. We also asked you to continue this type of journaling at various points in the book.5 If you have done this, collect all the pieces of journaling that you have done and put them together to form the basis for developing a reflexive account of your thinking during the process of designing your research. The type of thinking that these activities require of you is the type of thinking you will need to do when declaring your hand.

CAPTURING ALL THIS IN A DIAGRAM OF SOME SORT

From the outset of this book-length conversation6 about research design, we have made it clear that our focus is on the thinking that occurs when we are designing our research.
This is a thinking that affects the choices and decisions we make in the process of developing our research design. Based on the many conversations we have had about this type of thinking throughout the book, we thought that it could be useful at this point for readers if we developed a diagram that captured the iterative process of research design development that has been the focus of this chapter and the entire book.

However, as we began to develop our diagram, we found ourselves facing the same problem that others who have tried to develop this sort of diagram have had. This problem is how to summarize and capture in a static diagram the iterative process of designing research—a process that does not keep still and which requires us to constantly think backward and forward between the various areas of that design. And how could we stop any diagram produced from reverting to one simply about research procedures rather than research design? As a result, we began to question if it is possible to capture all of the areas, and aspects of those areas, related to designing research in a succinct and useful
diagrammatic summary of some sort that does not strip the representation of a research design of the thinking that underpins it.

Developing, and Then Diagramming, an Overview of the Process of Research Design

When we worked through the above questions, we realized that we needed to keep the emphasis on the thinking that shapes the way our research is designed when developing this diagram. After many attempts to do so, the result was Figure 11.1 below, which we have called Research Design: Putting All the Thinking Together to reflect this emphasis. In this diagram, double-headed arrows are used across all the interconnected areas that make up a research design in order to capture the type of reflexive thinking underpinning designing research as an iterative process. Responsibility, ethics, and reflexivity frame and permeate the entire research design process. In this figure, we have tried to remove any impression of step-by-step linearity in the design process. Thinking about what the research is being designed to do or be used for is centered in this visual presentation and shown as constantly connecting and reconnecting with all other areas—none of which stand alone.
FIGURE 11.1 ■ Research Design: Putting All the Thinking Together

[Diagram: an interconnected web of areas joined by double-headed arrows: some sort of problem/hunch; substantive research area; research question/hypothesis; making explicit methodological assumptions (about what type of knowledge will be produced and what type of methods will enable the production of that type of knowledge); making explicit theoretical assumptions (about what theory is and is for, and the specific theoretical concepts in use); methodological matters; specific methods for data collection and data analysis; and using the literature to join conversations (about theoretical and conceptual writing relevant to your study, about empirical data-based studies related to the substantive focus of your study, and to compare the findings of your study to existing empirical and theoretical work). Responsibility, Ethics, and Reflexivity frame and permeate the whole.]

The multiple reflexive conversations
with different types of literature that occur throughout the process of designing research are represented, as are the connections between those literatures. We have used double arrows in and out of the circle "Using the literature to join conversations" to reflect that working with the literature related to any area of the research affects our thinking about that area, and in turn, that thinking may lead us to consult further or different literature as our thinking and ideas develop.

Imperfect as it may be, this visual presentation is a useful one. This is not because it is a simple, easy-to-understand presentation. Quite the contrary! We think that this visual presentation is useful because it captures, and thereby exposes, the complexity of designing research.

The Importance of If . . . Then Thinking

The description of how we developed this diagram, and why it takes the shape that it does, reinforces a central theme running through this book. This is that designing research involves a lot of "If . . . then . . ." thinking (Morse, 2017, p. 48), thinking such as, If I do this . . . then . . .; or Because I have done this . . . then . . .; or I did this but I know that another way I could have done this was . . .; or Consequently I will be able to say this as a result of my research, but I will not be able to say this because of the choices I have made.

As we have discussed at numerous points in the book, reflexivity is a process in which you will revisit, think through, challenge, and refine the decisions you make throughout the entire research design process. It is an important part of the dynamic process of designing your research. Turning your gaze on yourself, and the decisions that you make when developing your research design, brings into focus what Maxwell (1996) termed the "conceptual context" (p. 25) of your research: the "goals, experiences, knowledge, assumptions, and theory you bring to the study and incorporate in the design" (p. 6). For example, consider how your background and assumptions about what scientific research is, and the ways it can, should, or must be done, affected the choices you made when designing your research. This can expose things that you may have been taking for granted—for example, that this is the only way that this could be studied, or that one method is necessarily better than another. To think through, and then declare, your hand in this way relies on, and therefore requires, the dynamic reflexivity7 that is central to the process of designing research. This applies to all types of research design. Remember, "all research is based on the researcher's basic set of beliefs that guide action" (Janesick, 2008, p. 566).

TIP: USING OUR DISCUSSION TO HELP YOU WHEN YOU ARE CONSIDERING DRAWING A DIAGRAM OF YOUR RESEARCH DESIGN

Our discussion of what we need to think about when trying to capture and summarize our discussion of research design in a diagram provides a useful guide to some of the things you will need to think about when providing some sort of summary or overview of both your research design and the process of designing that research. Such an overview may take the form of a diagram, a text, or both text and a diagram. Asking such questions of yourself and your research design is central to iterative and reflexive research design development.
CONCLUSIONS

When thinking about how to end this chapter, and in effect the book, since this is its final chapter, we struggled. This is because our discussion has covered so much related to research design. How could we "round off" the discussion by providing an overview of the key points and messages that have underpinned the discussion, but at the same time not give the impression that this wide-ranging discussion is able to be reduced to a series of key points—a type of dot point thinking that we have critiqued in the discussion in this book?8

In the end, we decided on two foci for this discussion. The first focus relates to rounding off, and drawing conclusions specifically related to, the discussion in this chapter. This has been a discussion about the importance of knowing and declaring the cards you hold in your research design–related hand. The second focus aims to round off the discussion we have had throughout the book. We do this by indicating not only the key points emerging from the discussion in this chapter, but also key points and messages about designing research that we have distilled from our book-length discussion.

Focus 1: Rounding Off This Chapter

The discussion in this chapter has been about recognizing the many possible types of research design–related "cards" we hold in our hand when designing research. These are cards such as particular types of methods, methodologies, and research questions, as well as the assumptions about knowledge, research, and science that these preferences are connected to, and derived from. We collect (or discard) particular types of cards related to research design, such as the way a concept is understood or a preference for a particular way of doing research, from the disciplinary and methodological traditions we have come into contact with or been trained in.
None of these cards, like actual playing cards, is innately better than any other card to have in your hand. It is what they can be used for, and how they fit with each of the other cards in your research design hand, that makes them more or less suitable to have in your hand when designing a specific piece of research. When we design our research, we need to know what these cards are and what the effects of having them in our hand are, justify why we have chosen to keep these cards and discard others in light of the purpose of the research, and thereby declare and justify our research hand. Consequently, declaring your research hand is not just about what is in your research hand but why it is in it.

Designing research requires us to think reflexively throughout the entire research design process. Adding an -ing to the word "design" emphasizes the active and engaged process that designing research is, and from which a specific research design emerges. Designing research involves thinking through a complex web of interconnected decisions. In other words, designing research is a process driven by a way of thinking that requires us to constantly revisit and assess our decisions when developing the various aspects of our research design. This enables you to declare and defend not just what is in that hand, but why it is in that hand. Declaring and defending your research hand is a process that requires reflexivity on the part of the researcher. Reflexivity is about adding an and why to the thinking we do and
286  Research Design the decisions we make about our design. When thinking reflexively, we continually ask ourselves questions about the decisions we have made about the emerging design in order to modify or confirm those decisions. The goal of asking these questions is to improve and refine the emerging research design. Such iterative and reflexive thinking is what lies at the heart of the process of designing research and is embedded in any research design produced as a result of that process. It is an important part of ethical and responsible research, and research design. This is the case no matter what research approach we are using. It is not an optional extra. Rather, reflexive thinking is “a way to examine the complete research process and a vital procedure for enhancing validity” (Lahman, 2018, p. 35) of all types of research—a point that we have emphasized many times in the book. Understanding this means that you are well on the way to becoming an informed, responsible, and thoughtful scholar, and therefore exponent, of designing research. We do not pretend that such thinking will be a straightforward or easy process. It will be challenging and messy at times and “more work than if you did things in a routine way that didn’t make you think at all” (Becker, 1998, p. 7). However, in the end you will save time and work as your research design will be well thought out, clear, and the decisions that make it up and give it its shape will be able to be justified. In other words, you will know and be able to declare and defend your research hand. Focus 2: Ending With Some Key Points About This Chapter and Also the Book as a Whole As we have done in each chapter of this book, we will end this chapter with a list of key points. However, because this is the last chapter of the book, these key points will serve two purposes. The first is to give an overview of the key points that emerge from this chapter. 
The second is to give an overview of key points about the idea of designing research that have emerged from the book. However, in doing so, we add the caveat that these key points and messages should not be read as sufficient or complete in themselves. Each of these key points and messages emerges from, and summarizes, much thinking that sits behind it. It is that thinking and its effect on the process of designing research that makes them the key points and messages. This thinking has been the red thread that has run through all the chapters of the book. If you read and understand the thinking that gave rise to these key points, then you are well on your way to knowing what you will need to think, consider, and make decisions about when you are designing your research.

SUMMARY OF KEY POINTS

Key points arising from the discussion in this chapter, Chapter 11, are the following:

• An important part of the dynamic and reflexive process of designing research is recognizing, thinking through, and declaring your research-related hand when designing and writing about your research.
• This includes declaring the understandings, beliefs, and assumptions you have about research and bring with you when you design your research.

• These understandings, beliefs, and assumptions provided a conceptual context (Maxwell, 1996) for your research design, and the way in which you went about designing it.

• Reflexive thinking enables you to recognize, think through, and declare your research-related hand.

• Declaring your hand requires you to be aware that designing research is complex and not the same as selecting a research design.

• During the research design process, you will constantly examine and reexamine every part of the overall design you are developing.

• This may require you to rethink prior decisions you have made or ideas you have about that design—this is a necessary and key part of designing research.

Succinctly summing up key points that can be distilled from the discussions in the 10 chapters about designing research that have preceded this one would produce a list of points including the following:

• Designing research is a dynamic process involving reflexive thinking focused on, and moving between, iterations of interrelated and interconnected theoretical, substantive, disciplinary, methodological, methods-related, and ethical considerations.

• It is the decisions that we make about these considerations when designing our research that collectively make up what we term a research design.

• Any research design is only as strong as the credibility of every one of those decisions and the thinking behind that decision.

• Designing research involves developing a plan to guide (not constrain) a specific research project or undertaking.

• The various parts of that research plan and how they relate to each other must make up a consistent and congruent whole.
• Neither designing research, nor a research design, is synonymous with the selection of methods to be used in the research.

• Methods, as well as diagrams, techniques, and procedures related to those methods, are part of a specific research design. However, they do not make sense in the context of that specific design and project unless they are viewed and understood in relation to the other facets of that design, for example, the purpose of the research and the type of knowledge required to address that purpose.

• Responsible and reflexive research design is about moving across, and between, the thinking that connects and holds together the areas that make up that design, thereby giving sense and coherence to that design.
KEY RESEARCH-RELATED TERMS INTRODUCED IN THIS CHAPTER

declaring your hand
designing research

SUPPLEMENTAL ACTIVITIES

1. Re-read the eight dot points about key points that can be distilled from the discussions in the 10 chapters about designing research. You will find them at the end of the conclusions section of this chapter. Now, working alone or with others, discuss the following:
   a. The thinking that underpins each of these key points, and from which they have arisen
   b. Why we consider them to be key points
   c. Other key points that might be added, and why

FURTHER READINGS

Becker, H. S. (1998). Tricks of the trade: How to think about your research while you're doing it. The University of Chicago Press.

Lumsden, K. (2019). Reflexivity: Theory, method, and practice. Routledge.

NOTES

1. For example, in Canasta it would be a wild card and therefore very useful; in other games it might simply be a very low scoring card.
2. Hesse-Biber has used the idea of a "dialogue table" in her discussion of mixed methods (see Hesse-Biber, 2010a, p. 417).
3. See Cheek's (2022) discussion of the effect of funding on mixed methods research design.
4. Schostak's focus was on a student's research thesis, but the ideas he puts forward apply equally well to a student's or researcher's research design (which of course is a central part of that thesis and the judgements made about its credibility).
5. See Activity 2 of Chapter 5, Activity 1 of Chapter 7, and Activity 1 of Chapter 9.
6. See, for example, the Preface and Chapter 1.
7. See the section The Importance of Reflexive Thinking When Designing Research in Chapter 1 of this book for more about what reflexivity is and does.
8. See Chapter 5, where we critiqued the idea of dot point thinking.
GLOSSARY

analysis of qualitative data. Analysis of data involves cleaning, ordering, transforming and modelling the data collected to produce useful insights and information. Data collection and analysis continue iteratively until trustworthy and credible interpretations of that qualitative data can be made.

anonymity. Measures used by researchers to protect the identity of the participants in their study. In research design, anonymity often involves using pseudonyms when referring to participants or sites in the study.

audit trail. The thinking and decision-making throughout the process of designing your research. Highlights design-related decisions and/or modifications made during this process—including when and why they were made. In this way, an audit trail builds the credibility and trustworthiness of the design (Lincoln & Guba, 1985).

capitalization. Capitalization of notations related to components in a mixed methods study reflects the priority given to the components in relation to the overall logic of inquiry that is shaping the mixed methods study. The capitalized notation QUAL means that the overall study is qualitatively driven whereas the capitalized notation QUAN indicates that the overall study is quantitatively driven. The use of the noncapitalized qual or quan notation indicates that that component is a supplementary one.

category. A grouping of segments of text (i.e., codes or parts of memos) that relate to a particular aspect of a line of inquiry or that are instances or examples of the same idea. This is a higher level of focus that captures what the codes in that category are instances of.

codebook. A list of predefined codes, sometimes with content descriptions and brief examples, that the researcher refers to when reading the transcripts of interviews or field notes of observations. The researcher looks specifically for examples of those codes in the data.

coding. The "active process of identifying, labeling and systemizing data as belonging to or representing some type of phenomenon" (Tracy, 2020, p. 234). When coding, you will identify a segment of data, such as a word or a series of words in an interview transcript, and give that segment of data an initial label or code that captures what the segment of data is about or relates to.

computer-assisted qualitative data analysis software (CAQDAS). A generic name for software designed to assist qualitative data analysis. While such software may help you "find, categorize, and retrieve data and texts more quickly than using a manual search" (Liamputtong, 2020, p. 269), it does not in itself analyze the data.

confidentiality. Keeping a research participant's identity anonymous is part of confidentiality. However, confidentiality is more than just not disclosing the name, identity, or identifying features of participants. It is also about the way that any data that a participant has provided, or is related to that participant, is shared or not shared, and with whom.

construct (of a variable). A specific way of elaborating on, or understanding, an abstract concept (Black, 1999). When you decide upon a specific way of understanding a variable of interest, you are developing the construct for that variable that you will use in your quantitative study.

construct validity. The degree to which a measurement instrument measures what it is supposed or expected to measure in a quantitative research design.

constructivism. A paradigmatic stance in which the researcher does not seek to eliminate the subjective thoughts, feelings, and opinions of those being researched to concentrate only on specific objective "facts" and variables that must be controlled. Instead, the researcher actively seeks to understand "the complex world of lived experience from the point of view of those who live it" (Schwandt, 1994, p. 118).
content validity. The degree to which a measurement instrument used in a quantitative research design includes the measurement items necessary and sufficient to measure every element of the construct in question.

core component. In a mixed methods study, the primary method in the study, which "must be conducted to a standard of rigor, such that, if all else were to fail, it could be published alone" (Morse & Niehaus, 2009, p. 23). In other words, the core component can stand alone as a research method for the study. See also supplemental component.

correlational approach. An approach in quantitative research design in which the researcher investigates the relationships between variables without controlling or manipulating any of them. Correlational approaches have the potential to address research questions about "what goes with what" (Oppenheim, 1992, p. 21) in a population.

credibility of research. Whether or not research is considered credible (or even research at all) will be based on the basic beliefs that make up the paradigmatic stance of the researcher. Judgments about the credibility of research therefore make no sense unless they are in relation to a particular worldview, or basic set of beliefs, about research. To be credible, the design of the research, and its conduct, must be in keeping with the paradigmatic stance underpinning it. Criteria for making judgments about the credibility of research do not automatically transfer from one paradigmatic stance to another.

data. Individual items of information that reduce or "chunk" reality into manageable units (Bernard et al., 2017) able to be analyzed and interpreted to produce the findings or results of our study. What is "chunked" (i.e., what is considered data and why) and how it is "chunked" (i.e., what methods are used or not used to collect and analyze that data, and why) depends on the way that the idea of data itself is viewed and thought about.

data condensation. In qualitative approaches, refers to the "process of selecting, focusing, simplifying, abstracting, and/or transforming the data that appear in the full corpus (body) of written-up field notes, interview transcripts, documents, and other empirical materials" (Miles et al., 2014, p. 12). Data condensation is about making choices about what parts of the large body of data you have collected are relevant for addressing your research questions.

declaring your research hand. Declaring the decisions that you have made when designing and thinking through your research is an important part of research design. It is also about being able to justify those decisions. To declare and justify your research hand will require reflexive consideration and acknowledgment of the choices you have made, and the effect that those choices have had on your research design.

deductive reasoning. In research based on deductive reasoning, empirical evidence is collected and used to test an existing theory or a theoretically based assertion. The researcher begins by thinking about that theory, or aspects of that theory, and predicts what the empirical evidence—that is, the data collected—should show if that theory, or those aspects, are supported.

descriptive quantitative approach. A descriptive quantitative approach seeks to collect and analyze numerical data to identify and numerically describe population characteristics. Such descriptive quantitative approaches have the potential to address research questions about what is going on in a study population in terms of how many, or how much, of something of interest is occurring within the population, without drawing inferences about cause and effect.

designing research. Designing research requires us to think reflexively throughout the entire research design process. Adding an -ing to the word design emphasizes the active and engaged process that designing research is, and from which a specific research design emerges.

diagramming. An active, dynamic and reflexive process of developing a diagram about, or representing in diagrammatic form, what you have thought through, and then made decisions about, when designing your mixed methods research study (Morse, 2017). The diagram produced is a static representation of this process of thinking and therefore cannot be fully understood removed or decontextualized from the process giving rise to it.
discourse analysis. A research approach that seeks to study written or spoken language in relation to its social context. Discourse analysis is premised on the understanding of "language as a meaning constituting system which is both historically and socially situated" (Cheek & Rudge, 1994, p. 61). Therefore, the focus of analysis is the meaning constituting systems in texts generated from, for example, interviews, news articles, or visual texts such as pictures and films (Taylor, 2013).

empirical. Empirical research is an evidence-based approach that draws conclusions from results derived from observations, experiments, interviews, or other forms of data derived from real-life situations. An empirical approach relies on real-world data, measurements, and results rather than theories and concepts.

epistemology. The study of knowledge. Epistemological considerations focus on questions such as, What do we mean when we say that we know something? What enables us to claim that we know that? How does knowledge differ from opinion or belief? Epistemological considerations when designing our research focus on the type of knowledge we will need to produce to be able to reach credible or warranted conclusions or inferences.

estimate. Used in quantitative research designs, an estimate for a population characteristic is the numerical value of the corresponding characteristic of a sample drawn from that population.

ethics committees. Formally constituted committees, also known as Institutional Review Boards, or IRBs, that regulate ethical matters relating to research design and conduct.

ethnography. A specialist qualitative research approach that seeks to understand the culture of the social context being studied. It is the focus of an ethnographic research approach on the theoretical idea of culture that makes the study ethnographic. Put another way, "to be an ethnographic study, the lens of culture must be used to understand the phenomenon" (Merriam & Tisdell, 2016, p. 31) that is being studied.

experimental/quasi-experimental approach. A quantitative research approach that manipulates variables under controlled conditions to identify cause-and-effect relationships. Experimental approaches have the potential to address research questions about why something happens. Quasi-experimental approaches likewise involve the manipulation of an independent variable but without the randomization that is central to experimental approaches.

external validity. The extent to which the numerically based findings of a quantitative study using a sample can be generalized to a population beyond that sample.

feasibility. Whether the research can be done within the time and resources that are available.

focus group. A specific type of interview involving a group of people. "Focus groups are a research method that collects qualitative data through group discussions. This definition contains two components: first, the goal of generating data, and second, the reliance on interaction" (Morgan, 2019, p. 4). The reliance on the group interaction sets focus groups apart from individual interviews.

generalizing in quantitative research. Arriving at conclusions about a study population based on numerical data from a sample drawn from that population. Generalization is enabled by establishing a relationship between what is going on in a sample and what is going on in the population from which that sample is drawn.

grounded theory. A whole-of-study theoretical and methodological approach underpinning your research design. Uses strategies of inquiry designed to enable you to develop a theory able to address your research problem or provide answers for your research questions. The theory is grounded in, and arises from, the analysis and interpretation of the data collected—hence the name grounded theory.

hypothesis. A statement about what we predict empirical evidence to reveal in a specific situation in a specific context (and in the social sciences often for a specific group of people). It is a prediction "about the nature of relationships between two or more variables expressed in the form of a testable statement" (O'Leary, 2017, p. 377) based on "an informed reading of the literature, a theory, or personal observations and experience" (Nardi, 2018, p. 48).
292  Glossary hypothesis testing. Testing an hypothesis means applying a procedure for statistically analyzing the research data designed to detect any inconsistency between the data collected about one or more variables and the hypothesis made about that (those) variable(s) if there is such an inconsistency to detect. hypothetico-deductive thinking. A type of thinking that begins with defining the general assumptions and understandings about how things work, and which variables are involved in a specific situation. From that understanding, a testable statement (a hypothesis) about how things work is deduced. That hypothesis is then tested by analyzing numerically data collected specifically to enable assessing whether the hypothesis is supported by that numerically based empirical evidence. individual interviews. One of the, if not the most, common way that data is collected in qualitative research approaches. This form of interview involves a one-to-one dialogue between an individual participant and the researcher. Therefore, you will see an individual interview sometimes referred to as an in-depth individual interview. inductive reasoning. In research based on inductive reasoning, empirical evidence is collected and used to build theoretical and empirical concepts and understandings. The researcher begins by collecting data and seeks to develop concepts and understandings about the area (rather than starting with a predetermined or a priori set of theoretical concepts to test). informed consent. Informed consent is when participants agree or consent to participate in the research. It means that the participant understands both that they are giving consent and what they are consenting to. Such consent relies on full disclosure by the researcher of what participating in the research involves. inquiry paradigms. 
Guba and Lincoln (1994) developed the idea of inquiry paradigms to make explicit the intersections between ontology, epistemology, methodology, and methods, and how they affect what is studied, how it is studied, and the role that the researcher plays in that study. This includes “what falls within and outside the limits of legitimate inquiry” (Guba & Lincoln, 1994, p. 108).

Institutional Review Boards (IRBs). See ethics committees.

interview guide. A prepared list of the questions or areas that you will use to guide the discussion in an interview or focus group, including follow-up questions, prompts, and interview probes.

interview probes. Follow-up questions or statements used in qualitative research interviews, designed to enable you to pursue new and interesting leads as the interview progresses or to clarify aspects of the answers given to the questions you have asked.

iterative process. An iterative process involves cycles of thinking in which you begin with an idea, think it through, and then revisit, refine, or change the initial idea in light of that thinking; you then think that change through, and so on. This continues until you have landed on a research design that you believe will enable you to complete your research in a credible, systematic, and well-thought-through way. This thinking also continues as you put that design into action.

layers of consent. A strategy that can be used to address issues about consent related to the repurposing of data. Research participants giving informed consent for the primary study are also explicitly asked whether they consent to their data (1) only being used in that study, (2) being reused in later studies, or (3) being archived or stored to be accessed by other researchers and reused in other studies.

lines of inquiry. Used to guide qualitative interviews.
These initial lines, or areas, of inquiry emerge from your thinking about what it is important for you to know more about to get an in-depth picture of what is going on in the situation that your research is focused on. Once you have identified your lines of inquiry, you can then develop interview questions related to each of them.
measurement instrument. The complete set of measurement items designed to measure each of the variables identified as having the potential to enable you to answer your specific research question(s). Together, these items make up the measurement instrument included in your quantitative research design.

measurement item. A question, statement, or observation designed to measure a specific variable (or an aspect of a variable).

memos. The preliminary and developing analytical notes or hunches written by a qualitative researcher during analysis of data. Memos can be used to describe what happened during data collection, to record thoughts about what is going on and/or questions, or to note what you need to find out more about. Memos can also be used to highlight consistencies or inconsistencies in the data collected that require further elucidation or follow-up.

methodology. The thinking that gives rise to the choice and use of particular methods in your research design. An example of a methodological consideration is whether or not a particular method will enable you to obtain the type of data needed to generate the type of knowledge required to address your research question(s).

methods. The techniques used to obtain and analyze research data. Each research-related method has specific procedures associated with it that are designed to obtain a particular type of knowledge or information, usually referred to as data. The methods chosen must be consistent with the type of knowledge or data we want to obtain from our research (our methodological considerations), which in turn must be consistent with the nature of the research problem.

mixed methods research approach. A research approach that integrates more than one method to provide a better understanding of the phenomenon being studied.
Some definitions of mixed methods refer to “a single study combining qualitative and quantitative research approaches” (e.g., Creswell, 2015), while others limit the definition to studies where one of the methods is incomplete (see supplemental component) and cannot stand alone (e.g., Morse & Cheek, 2014).

mixing. In mixed methods design, the conceptual process of integrating ideas from different ways of thinking “into a meaningful pattern and a practically viable blueprint for generating better understanding of the social phenomena being investigated” (Greene, 2007, p. 16). Mixing occurs at a number of levels, including the overall project design (e.g., the methods or components used), the data collected (e.g., textual or numeric), and the analyses of those data (e.g., the findings of each component) (Yin, 2006).

multiple methods/multimethod research. If mixed methods research is defined as using both qualitative and quantitative methods, then research using more than one qualitative method, or more than one quantitative method (but not both), can be defined as multimethod. However, if mixed methods research is defined as combining one complete method and one incomplete method, then a study using one complete qualitative method and one complete quantitative method would be considered a multiple methods/multimethod study.

notation system. In mixed methods research, a notation system, developed in 1991 by Morse and subsequently expanded by others, is commonly used to describe or write about mixed methods and mixed methods research designs. For example, it uses the notations QUAL or qual to refer to components in a design that draw on qualitatively derived logics of inquiry, and the notations QUAN or quan to refer to components that draw on quantitatively derived logics of inquiry.

onto-epistemological assumptions.
The stances taken on different ontological and epistemological assumptions about the nature of reality, and about how we know what we know about that reality (Crotty, 1998), under a particular paradigmatic approach. Which onto-epistemological view we adopt affects the methodological decisions we make when we design our research.

ontology. The study of “the nature of truth and reality and whether it is external or constructed” (Creamer, 2018, p. 43). It is about what exists and what can be considered real. There are different views, or ontological positions, about the nature of reality, for example, constructivism and positivism.
operational definition. The identification of quantifiable factors, or indicators, of a construct, together with the specification of which quantifiable factors or indicators to include in the measurement of the variable. The operational definition of a variable, informed by the construct of that variable, enables collecting information that is relevant to learning something about that variable.

paradigm. A type of worldview or set of basic beliefs (Guba & Lincoln, 1994) that guides the thinking of, and therefore the decisions made by, researchers in relation to what a researcher might decide to study, how they will study it, and the way that they interpret and draw conclusions about what they have studied.

paradigmatic stance. The set of basic beliefs held by a researcher about what research and science are and how research can be designed and conducted in order to be scientific and therefore credible.

point of interface. The point of interface in your mixed methods research design is where the components making up your research design are brought together and integrated in some way, usually (but not always) occurring “once the analysis of each component is completed” (Morse, 2017, p. 9).

positivism. A dominant inquiry paradigm in western scientific thought for hundreds of years, positivism is based on a “conviction that scientific knowledge is both accurate and certain” (Crotty, 1998, p. 27). The task of the researcher is to study the reality that already exists “out there” (Lincoln & Guba, 2013, p. 38). Consequently, positivism is premised on a realist ontological position.

post-positivism. A paradigmatic stance under which research seeks to distinguish between beliefs and valid beliefs, rather than to produce absolute truths (Campbell & Russo, 1999). Whether a belief is valid is based on judgments about the validity of the empirical evidence it is based on and how that evidence was obtained and analyzed.
Experimentation is retained as the basic methodological strategy, although it is conceded that experimentation “cannot produce ultimately infallible results” (Lincoln & Guba, 2013, p. 38).

power of a hypothesis test. The ability of a hypothesis test to correctly claim support for the statement making up the hypothesis.

priority or weighting of components. In a mixed methods study, priority (also referred to as weighting) refers to the overall priority given to the components in the study in relation to the overall logic of inquiry. For example, is the overall study qualitatively or quantitatively driven in terms of its overriding methodological or philosophical emphasis?

pseudonym. A fictitious name given by researchers to participants or sites in a research study. The pseudonym is used instead of the participants’ or sites’ real names when reporting and discussing the research, in order to protect their anonymity.

purposeful (or purposive) sampling. A type of sampling used in qualitative research in which the people, sites, or texts selected to make up the study sample are chosen because they are information rich in some way about a situation, event, context, site, or experience we are interested in knowing more about: the purpose of the study.

qualitative component (in a mixed methods study). In mixed methods research designs, the notations QUAL or qual refer to components in a design that draw on qualitatively derived logics of inquiry.

qualitative research approach. A research approach designed to enable the emergence of rich and qualitative interpretations of the perceptions or experiences of people about specific aspect(s) of the everyday context(s) in which they exist. This type of approach is often referred to as naturalistic or interpretive inquiry. It uses nonnumerical data to derive in-depth understandings of people’s perceptions, beliefs, attitudes, and experiences of a situation or phenomenon.

quantitative component (in a mixed methods study).
In mixed methods research designs, the notations QUAN or quan refer to components in a design that draw on quantitatively derived logics of inquiry.
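The power of a hypothesis test, defined above, can be computed for simple cases. As a hedged illustration (not from the book; the effect size and sample size below are hypothetical), the power of a one-sided one-sample z-test has a closed form that needs only Python's standard library:

```python
from statistics import NormalDist

def power_one_sample_z(effect_size: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a one-sided one-sample z-test.

    effect_size: assumed true standardized mean difference (Cohen's d).
    n: planned sample size.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)  # rejection threshold under H0
    # Under the assumed effect, the test statistic is shifted by d * sqrt(n)
    return 1 - z.cdf(z_crit - effect_size * n ** 0.5)

# A medium assumed effect (d = 0.5) with n = 30
print(round(power_one_sample_z(0.5, 30), 2))  # ≈ 0.86
```

Calculations like this are one way sample size considerations (see the sample size entry below) can be justified when planning a quantitatively driven design: power rises with both the assumed effect size and the number of participants.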
quantitative research approach. A research approach that relies on statistical analysis of numeric data to identify, classify, and quantitatively describe characteristics of, and trends in, a given (large) group of people (i.e., a study population). The goal of the research is to quantify the presence of, and links between, objective measures.

quantitative survey. A research method used to collect data from a group of respondents by asking people a set of predefined questions or asking them to respond to a set of predefined statements. The measurement instrument of a survey can be, but is not always, a questionnaire.

quasi-mixed designs. Research designs claiming to be mixed methods studies in which “two types of data are collected, but there is little or no integration of findings and inferences from the study” (Teddlie & Tashakkori, 2011, p. 294).

random selection process. A probability-based process in which every member of the study population has an equal probability of being selected as part of the sample in a quantitatively driven research design.

realism. The ontological view that the world exists and has an external, objective reality independent of the perceptions of those living in that world (Schwandt, 2015).

reflexivity/reflexive thinking. The quality or process of reflecting, folding, or bending back (Finlay & Gough, 2003) on our own thinking to work out why we have come to think about something in the way that we do. Reflexivity is important for identifying assumptions, challenging us to work out why we think what we do when designing our research, and asking whether there are other possible ways of thinking about that design.

relativism. An ontological position according to which the world is understood as made up of socially constructed meanings. These everyday social interactions and constructions make up the way that the world is. The world does not exist independently of those in it.

relevant literature.
The body of knowledge built up by the work of others that is relevant to the various areas making up your research design. This can be literature related to empirical work in the substantive area of your research, to methodological and methods considerations, or to ethical and theoretical matters. Working with the literature is central to the iterative process that underpins the development of a research design.

reliability. In quantitative research, the degree of consistency across multiple measures of the same construct, such as when the same measurement is conducted multiple times (test-retest reliability), when the same measurement is conducted by different people (interrater reliability), or when individual items designed to measure the same construct in a single measurement instrument are consistent with one another (internal consistency reliability).

representative sample. A sample in quantitative research that accurately represents the study population according to specified criteria, such as the distribution of age, gender, and level of education in the population, as well as other criteria relevant to the research question(s) the study is designed to address.

repurposing of data. The reuse of data from one study in a subsequent study or for an additional purpose. The original data collected for one study (or even data collected from several studies) can be used to undertake a form of secondary analysis to answer questions that were not part of the original study.

research area. The broad initial topic(s) or wide area(s) of interest giving rise to the research.

research design. The process by which a research idea is developed into a research project or plan that can then be carried out by a researcher or research team.

research ethics. The considerations and principles surrounding moral behavior in research contexts (Wiles, 2013).
Ethical issues often identified in relation to the design and conduct of research include respect for participants’ dignity and privacy, and issues of welfare and social justice more generally. Thinking through ethics at all points of the research design process is part of responsible research (Kuntz, 2015).
research problem. The aspect of the research topic or area in which we are interested and on which our research will focus.

research question. A more specific formulation of the research problem. A research question is posed in such a way that it can be researched and, when addressed, provides information able to contribute to the body of knowledge relevant to the research problem that the question is related to.

response rate. The proportion of sampled individuals who are willing to provide information by responding to the measurement instrument that is part of a specific survey-based quantitative research study.

rigor in qualitative research. The measures and processes put in place to ensure that the research is conducted appropriately. For example, one measure to improve rigor in qualitative inquiry is to maintain an audit trail, demonstrating that there is sufficient data of the right type to support any interpretations or conclusions made.

sample size. In qualitative approaches, this refers to the number of people interviewed, observations made, or texts analyzed. In a qualitative study, sample size considerations include how we know that we have collected and analyzed “enough” data of an appropriate type to be able to reach the conclusions we do at the end of that study. In quantitative approaches, sample size refers to the number of members of the study population who provide the information that comprises the data for that quantitative study. In a quantitative study, sample size considerations include whether there are enough members of that population to enable us to conduct the types of statistically and probability-based analyses needed to answer our questions and/or address our hypotheses.

sample/study sample. A subset, or sample, drawn from the study population. The study sample comprises the members of the study population who provide the information that makes up the data of the study.

sampling strategy.
A procedure for selecting which members of the study population will be included in the sample. The logic underpinning this strategy varies depending on the approach being used in your research design (e.g., whether the underlying logic is quantitatively or qualitatively driven).

scientific method. A way of thinking about and doing research derived from the way research is done in the physical and natural sciences.

semistructured interviews. An interview process used in qualitative research that gives some structure to what is to be talked about but does not dictate the order or form in which it must be talked about, as a closed interview structure does. This enables you to follow up and probe unanticipated and interesting directions and areas that may arise during the interview.

sensitizing concepts for observations in qualitative inquiry. When making observations in social settings, sensitizing concepts (Blumer, 1954, p. 7) can play a role similar to that played by lines of inquiry in qualitative interviews, providing “jumping-off points or lenses” (Tracy, 2020, p. 29) when collecting data using observations.

social desirability. The tendency to depict oneself as conforming to social norms and common values, potentially leading to inaccurate responses in surveys or answers in interviews.

standardized observations. Standardized observations are most often used in quantitative approaches to gain data about the frequency of particular designated activities, events, or actions that a researcher has decided are of interest. Observations often take the form of some sort of observational log of the frequency of the event of interest occurring.

statistical validity. Conclusions drawn from a procedure for statistically analyzing the research data are statistically valid if the analysis procedure is appropriate for drawing that specific type of conclusion, and if the analysis is performed on a set of data that meets the requirements of that analysis procedure.
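A simple random sampling strategy, one instance of the random selection process defined earlier, can be sketched in a few lines. This is a hypothetical illustration (the sampling frame of respondent IDs is invented), using only Python's standard library:

```python
import random

def draw_simple_random_sample(population_ids, sample_size, seed=42):
    """Simple random sampling: every member of the study population has
    an equal probability of selection (sampling without replacement)."""
    rng = random.Random(seed)  # fixed seed only so the draw is reproducible
    return rng.sample(population_ids, sample_size)

# Hypothetical sampling frame of 500 respondent IDs
frame = [f"R{i:03d}" for i in range(500)]
sample = draw_simple_random_sample(frame, 50)
print(len(sample), len(set(sample)))  # 50 unique members drawn from the frame
```

For a qualitatively driven design, by contrast, selection would follow the logic of purposeful sampling described earlier rather than equal-probability selection.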
statistically reasonable. A research design is statistically reasonable if it can be justified and defended in line with the rules of statistics. A statistically justifiable and defensible research design is necessary for the research to be credible.

structured/closed interviews. A type of interview that uses an interview schedule in which each research participant is asked the same predetermined interview questions, using the same wording, in the same order. Usually there is a limited set of response categories for the person being interviewed to choose from. The aim in using a closed interview is to obtain some form of standardized data that can then be compared across large numbers of participants. Most often used in quantitative approaches.

study population. The group of people about which you want to be able to say something when using a quantitative approach to address your research question(s). This study population is generally a (large) group of people in which every member meets a set of inclusion criteria relevant to the research question(s), and no one meeting those criteria is excluded from the group.

substantive area of research. Matters related to the research topic of interest itself (e.g., smoking, or leadership) rather than to theoretical or methodological aspects of the research. For example, the substantive literature that you might draw on when designing your research is literature relevant to your problem area, consulted to find out what others have and have not done (matters of substance) and how this might affect what you choose to focus on (or not focus on) in that area.

supplemental component. In a mixed methods study, an incomplete component (or components) designed to supplement the core component in some way.

theory. A set of beliefs or concepts that explain aspects of the world.
Ideas or concepts from specific theories provide orienting ideas that influence all aspects of the research design: the questions that are asked, the way data are collected, the analysis of those data, the interpretation of that analysis, and consequently the conclusions that can be drawn from the research.

timing or pacing. The order, duration, or intervals at which the various components that make up a mixed methods research design will be undertaken in the study. For example, will the components be conducted at the same time, or will one method or type of data collection follow the other?

transcription. The process of converting a sound file into written text. While transcription may seem to be a simple exercise of converting spoken words to written words, during the process of transcribing an interview you revisit what happened and become more familiar with both what was said and the interactions that led to it being said. In other words, you immerse yourself in that interview data and begin to analyze it.

triangulation. The use of more than one data source, theory, or method to improve the credibility, and therefore the trustworthiness, of research. “Triangulation means that you take different perspectives on an issue in your study or in answering your research questions” (Flick, 2020, p. 187). The idea is that by using multiple sources or methods, fuller, richer, or more nuanced descriptions, analyses, and therefore interpretations of an issue are possible.

trustworthiness. The degree to which research findings can be accepted and recognized as significant by the audience of the research. Lincoln and Guba (1985) define trustworthiness in qualitative inquiry as comprising the elements of rigor, credibility, transferability, dependability, and confirmability.

typology. A system of grouping items into “types” according to their perceived characteristics or relationships. Typologies differ according to the basis of classification. For example, Greene et al.
(1989) developed a typology of five categories of purpose for mixed methods research design: triangulation, complementarity, development, initiation, and expansion.

unstructured/open interviews. Interviews that have a flexible and nonstandardized structure. Open-ended questions are often used in qualitative studies, and the interview is more of a conversation, with the researcher picking up on, and probing in more depth, areas that the participant has chosen to talk about when answering those questions.
validity. Validity is a complex concept, but broadly defined, the validity of research results or findings refers to the degree to which they represent reality. In quantitative research, we can say that results or findings are valid when they are credible, well founded, reasonable, justifiable, and defensible. (See also construct validity; content validity.)

variable. Anything (a person, object, event, or relationship) that, using a quantitatively driven research design, the researcher seeks to measure, manipulate, and control in order to produce data for analysis. The variables of a study are identified or determined by the researcher as the ones that have the potential to enable the researcher to answer a specific research question related to a study population of interest.

vulnerable populations. A group is considered vulnerable if there is good reason to believe that individuals in that group may, for some reason, have difficulty providing free and informed consent to participate in research.
REFERENCES

Agar, M. H. (2008). The professional stranger: An informal introduction to ethnography (2nd ed.). Emerald Group.

Aksøy, H. (2009). Leading through constant change: What are the issues for nurse managers arising from changes to the home care services? [Master’s thesis]. Oslo University.

Alabrese, E., Becker, S. O., Fetzer, T., & Novy, D. (2019). Who voted for Brexit? Individual and regional data combined. European Journal of Political Economy, 59, 132–150. https://doi.org/10.1016/j.ejpoleco.2018.08.002

Alvesson, M., & Sandberg, J. (2011). Generating research questions through problematization. Academy of Management Review, 36(2), 247–271. https://doi.org/10.5465/amr.2009.0188

Alvesson, M., & Sandberg, J. (2013). Constructing research questions: Doing interesting research. SAGE.

American Psychological Association. (2019). Depression assessment instruments. https://www.apa.org/depression-guideline/assessment

Anderson, E. E., & Corneli, A. (2018). 100 questions (and answers) about research ethics. SAGE.

Attia, M., & Edge, J. (2017). Be(com)ing a reflexive researcher: A developmental approach to research methodology. Open Review of Educational Research, 4(1), 33–45. https://doi.org/10.1080/23265507.2017.1300068

Bailey, K. A., Dagenais, M., & Gammage, K. L. (2021). Is a picture worth a thousand words? Using photo-elicitation to study body image in middle-to-older age women with and without multiple sclerosis. Qualitative Health Research, 31(8), 1542–1554. https://doi.org/10.1177/10497323211014830

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research (NCRM Methods Review Papers 19). National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/

Ballantyne, A. (2019). Adjusting the focus: A public health ethics approach to data research. Bioethics, 33(3), 357–366. https://doi.org/10.1111/bioe.12551

Beck, A. T., Steer, R. A., & Brown, G. K.
(1996). Manual for the Beck Depression Inventory-II. Psychological Corporation.

Beck, C. T. (2019). Secondary qualitative data analysis in the health and social sciences. Routledge.

Becker, H. S. (1998). Tricks of the trade: How to think about your research while you’re doing it. The University of Chicago Press.

Bernard, H. R., Wutich, A., & Ryan, G. W. (2017). Analyzing qualitative data: Systematic approaches (2nd ed.). SAGE.

Bishop, L. (2009). Ethical sharing and reuse of qualitative data. Australian Journal of Social Issues, 44(3), 255–272. https://doi.org/10.1002/j.1839-4655.2009.tb00145.x

Bjerknes, M. S., & Bjørk, I. T. (2012). Entry into nursing: An ethnographic study of newly qualified nurses taking on the nursing role in a hospital setting. Nursing Research and Practice, 1–7. https://doi.org/10.1155/2012/690348

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics. SAGE.
Blair, J., Czaja, R. F., & Blair, E. A. (2014). Designing surveys: A guide to decisions and procedures (3rd ed.). SAGE.

Blee, K. M., & Currier, A. (2011). Ethics beyond the IRB: An introductory essay. Qualitative Sociology, 34, 401–413. https://doi.org/10.1007/s11133-011-9195-z

Blumer, H. (1954). What is wrong with social theory? American Sociological Review, 19(1), 3–10. https://doi.org/10.2307/2088165

Boddy, C. R. (2016). Sample size for qualitative research. Qualitative Market Research, 19(4), 426–432. https://doi.org/10.1108/qmr-06-2016-0053

Boellstorff, T., Nardi, B., Pearse, C., & Taylor, T. L. (2012). Ethnography and virtual worlds: A handbook of method. Princeton University Press.

Bogdan, R. C., & Biklen, S. K. (2011). Qualitative research for education: An introduction to theories and methods (5th ed.). Pearson.

Bolin, A., & Granskog, J. (Eds.). (2003). Athletic intruders: Ethnographic research on women, culture, and exercise. State University of New York Press.

Bors, D. (2018). Data analysis for the social sciences: Integrating theory and practice. SAGE.

Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. SAGE.

Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97–113. https://doi.org/10.1177/1468794106058877

Bryman, A. (2016). Social research methods (5th ed.). Oxford University Press.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Campbell, D. T., & Russo, M. J. (1999). Social experimentation. SAGE.

Carey, M. A., & Asbury, J. E. (2012). Focus group research. Left Coast Press.

Cashin, A., Newman, C., Eason, M., Thorpe, A., & O’Driscoll, C. (2010).
An ethnographic study of forensic nursing culture in an Australian prison hospital. Journal of Psychiatric and Mental Health Nursing, 17(1), 39–45. https://doi.org/10.1111/j.1365-2850.2009.01476.x

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. SAGE.

Charmaz, K. (2014). Constructing grounded theory: A practical guide through qualitative analysis (2nd ed.). SAGE.

Cheek, J. (2000a). An untold story? Doing funded qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 401–420). SAGE.

Cheek, J. (2000b). Postmodern and poststructural approaches to nursing research. SAGE.

Cheek, J. (2004). At the margins? Discourse analysis and qualitative research. Qualitative Health Research, 14(8), 1140–1150. https://doi.org/10.1177/1049732304266820

Cheek, J. (2008). The practice and politics of funded qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Strategies of qualitative inquiry (3rd ed., pp. 45–74). SAGE.

Cheek, J. (2010). Human rights, social justice, and qualitative research: Questions and hesitations about what we say about what we do. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry and human rights (pp. 100–111). Left Coast Press.

Cheek, J. (2017). Qualitative inquiry, research marketplaces, and neoliberalism: Adding some +s (pluses) to our thinking about the mess in which we find ourselves. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry in neoliberal times (pp. 19–36). Routledge.
Cheek, J. (2018a). The BMJ debate and what it tells us about who says what, when and where, about our qualitative inquiry. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry in the public sphere (pp. 50–65). Routledge.

Cheek, J. (2018b). The marketization of research: Implications for qualitative inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (5th ed., pp. 322–340). SAGE.

Cheek, J. (2021). Maintaining the integrity of qualitatively driven mixed methods: Avoiding the “this work is part of a larger study” syndrome. Qualitative Health Research, 31(6), 1015–1018. https://doi.org/10.1177/10497323211003546

Cheek, J. (2022). The impact of funding on ways qualitative research is thought about and designed. In U. Flick (Ed.), Handbook of qualitative research design (pp. 339–354). SAGE.

Cheek, J., & Ballantyne, A. (2001). Moving them on and in: The process of searching for and selecting an aged care facility. Qualitative Health Research, 11(2), 221–237. https://doi.org/10.1177/104973201129119064

Cheek, J., Lipschitz, D. L., Abrams, E. M., Vago, D. R., & Nakamura, Y. (2015). Dynamic reflexivity in action: An armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren. Qualitative Health Research, 25(6), 751–762. https://doi.org/10.1177/1049732315582022

Cheek, J., & Morse, J. M. (2022). The power of qualitative research in mixed methods research designs. In U. Flick (Ed.), Handbook of qualitative research design (pp. 636–651). SAGE.

Cheek, J., & Øby, E. (2018). Digitalisation, organisations and people: Issues and challenges. Exploring the digitalisation imperative in higher education institutions in Norway [Unpublished funding application]. Østfold University College.

Cheek, J., Onslow, M., & Cream, A. (2004). Beyond the divide: Comparing and contrasting aspects of qualitative and quantitative research approaches.
Advances in Speech-Language Pathology, 6(3), 147–152. https://doi.org/10.1080/14417040412331282995
Cheek, J., & Rudge, T. (1994). Inquiry into nursing as textually mediated discourse. In P. Chinn (Ed.), Advances in methods of inquiry for nursing (pp. 59–67). Aspen Publishers.
Chubb, J., & Watermeyer, R. (2017). Artifice or integrity in the marketization of research impact? Investigating the moral economy of (pathways to) impact statements within research funding proposals in the UK and Australia. Studies in Higher Education, 42(12), 2360–2372. https://doi.org/10.1080/03075079.2016.1144182
Coffey, A. (2018). Doing ethnography (2nd ed.). SAGE.
Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). SAGE.
Creamer, E. G. (2018). An introduction to fully integrated mixed methods research. SAGE.
Creswell, J. W. (2011). Controversies in mixed methods research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 269–284). SAGE.
Creswell, J. W. (2013). Qualitative inquiry & research design: Choosing among five approaches (3rd ed.). SAGE.
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). SAGE.
Creswell, J. W. (2015). A concise introduction to mixed methods research. SAGE.
Creswell, J. W. (2016). 30 essential skills for the qualitative researcher. SAGE.
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative and mixed methods approaches (5th ed.). SAGE.
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). SAGE.
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.
Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. Allen & Unwin.
Cutcliffe, J. R., & Harder, H. G. (2012). Methodological precision in qualitative research: Slavish adherence or “following the yellow brick road?” The Qualitative Report, 17(41), 1–19. https://doi.org/10.46743/2160-3715/2012.1720
DeCuir-Gunby, J. T., Marshall, P. L., & McCulloch, A. W. (2011). Developing and using a codebook for the analysis of interview data: An example from a professional development research project. Field Methods, 23(2), 136–155. https://doi.org/10.1177/1525822X10388468
Demps, E. L. (2013). Excerpts from Elaine Demps: Understanding the faculty experience of teaching using educational technology. In Y. S. Lincoln & E. G. Guba (Eds.), The constructivist credo (pp. 83–198). Left Coast Press.
Denzin, N. K. (1971). The logic of naturalistic inquiry. Social Forces, 50(2), 166–182. https://doi.org/10.1093/SF/50.2.166
Denzin, N. K. (1989a). Interpretive interactionism. SAGE.
Denzin, N. K. (1989b). The research act: A theoretical introduction to sociological methods (3rd ed.). Prentice-Hall.
Denzin, N. K. (2001). Interpretive interactionism (2nd ed.). SAGE.
Denzin, N. K. (2008). The new paradigm dialogs and qualitative inquiry. International Journal of Qualitative Studies in Education, 21(4), 315–325. https://doi.org/10.1080/09518390802136995
Denzin, N. K. (2009). Qualitative inquiry under fire: Toward a new paradigm dialog. Left Coast Press.
Denzin, N. K. (2010). The qualitative manifesto: A call to arms. Left Coast Press.
Denzin, N. K., & Giardina, M. D. (2016). Introduction. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry through a critical lens (pp. 1–16). Routledge.
Denzin, N. K., & Lincoln, Y. S. (2000). Preface. In N. K. Denzin & Y. S.
Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. ix–xx). SAGE.
Denzin, N. K., & Lincoln, Y. S. (2005a). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 1–32). SAGE.
Denzin, N. K., & Lincoln, Y. S. (2005b). Strategies of inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 375–386). SAGE.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2011). The SAGE handbook of qualitative research (4th ed.). SAGE.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2018a). The SAGE handbook of qualitative research (5th ed.). SAGE.
Denzin, N. K., & Lincoln, Y. S. (2018b). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (5th ed., pp. 1–26). SAGE.
Denzin, N. K., & Lincoln, Y. S. (2018c). Part II: Paradigms and perspectives in contention. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (5th ed., pp. 97–107). SAGE.
Department of Health and Human Services. (2018). Code of federal regulations Title 45, part 46 – Protection of human subjects, subpart A (‘common rule’). https://www.ecfr.gov/on/2018-07-19/title-45/subtitle-A/subchapter-A/part-46
Detrow, S. (2018, March 20). What did Cambridge Analytica do during the 2016 election? NPR. https://www.npr.org/2018/03/20/595338116/what-did-cambridge-analytica-do-during-the-2016-election
Duncan, M., & Watson, R. (2010). Taking a stance: Socially responsible ethics and informed consent. In M. Savin-Baden & C. H. Major (Eds.), New approaches to qualitative research (pp. 49–58). Routledge.
Economic and Social Research Council. (2021). ESRC research data policy. https://www.ukri.org/publications/esrc-research-data-policy/
Fetters, M. D. (2016). “Haven’t we always been doing mixed methods research?”: Lessons learned from the development of the horseless carriage. Journal of Mixed Methods Research, 10(1), 3–11. https://doi.org/10.1177/1558689815620883
Finlay, L., & Gough, B. (2003). Prologue. In L. Finlay & B. Gough (Eds.), Reflexivity: A practical guide for researchers in health and social sciences (pp. ix–xi). Blackwell Science.
Flick, U. (2015a). Introducing research methodology (2nd ed.). SAGE.
Flick, U. (2015b). Qualitative inquiry—2.0 at 20? Developments, trends, and challenges for the politics of research. Qualitative Inquiry, 21(7), 599–608. https://doi.org/10.1177/1077800415583296
Flick, U. (2020). Introducing research methodology (3rd ed.). SAGE.
Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge University Press.
Fontana, A., & Frey, J. H. (2005). The interview: From neutral stance to political involvement. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 695–727). SAGE.
Fontana, A., & Prokos, A. H. (2007). The interview: From formal to postmodern. Left Coast Press.
Freeman, M. (2017). Modes of thinking for qualitative data analysis. Routledge.
Frey, J. H., & Fontana, A. (1991). The group interview in social research. Social Science Journal, 28(2), 175–187. https://doi.org/10.1016/0362-3319(91)90003-M
Friels, A. C. (2016).
Motivation towards success: A qualitative comparative study illustrating the differences in motivating factors in achievement between low SES high achieving and low achieving African American high school females [Doctoral dissertation, University of South Carolina]. https://scholarcommons.sc.edu/etd/3437
Gartner IT. (2018). Digitization. In Gartner IT glossary. https://www.gartner.com/en/information-technology/glossary/digitization
Geertz, C. (1973). The interpretation of cultures. Basic Books.
Gergen, K. J. (1991). The saturated self: Dilemmas of identity in contemporary life (Vol. 166). Basic Books.
Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485
Gerson, N. (2019). How to protect yourself from predatory publishers and other open access FAQs. SAGE Open Access. https://perspectivesblog.sagepub.com/blog/industrynews/oa/howtoprotectyourself
Giles, T., King, L., & de Lacey, S. (2013). The timing of the literature review in grounded theory research: An open mind versus an empty head. Advances in Nursing Science, 36(2), E29–E40. https://doi.org/10.1097/ANS.0b013e3182902035
Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Sociology Press.
Glaser, B. G. (1998). Doing grounded theory: Issues and discussions. Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine de Gruyter.
Glasson, V. (2017). 6 ways to spot a predatory journal. Rx Communications. https://rxcomms.com/blog/6-ways-spot-predatory-journal/
Gorard, S. (2003). Quantitative methods in the social sciences: The role of numbers made easy. Continuum.
Grav, C. M. (2015). Endring før endringen – en studie om organisasjonsendring før en endring er bestemt [Change before the change – a study of organizational change before a change is decided] [Master’s thesis, Østfold University College]. http://hdl.handle.net/11250/285200
Greene, J. C. (2002). Mixed-method evaluation: A way of democratically engaging with difference. Evaluation Journal of Australasia, 2(2), 23–29. https://doi.org/10.1177/1035719X0200200207
Greene, J. C. (2007). Mixed methods in social inquiry. John Wiley.
Greene, J. C. (2008). Is mixed method social inquiry a distinctive methodology? Journal of Mixed Methods Research, 2(1), 7–22. https://doi.org/10.1177/1558689807309969
Greene, J. C. (2015). Preserving distinctions within the multimethod and mixed methods research merger. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 606–615). Oxford University Press.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255–274. https://doi.org/10.3102/01623737011003255
Greener, I. (2011). Designing social research: A guide for the bewildered. SAGE.
Greenhalgh, T., Annandale, E., Ashcroft, R., Barlow, J., Black, N., Bleakley, A., Boaden, R., Braithwaite, J., Britten, N., Carnevale, F., Checkland, K., Cheek, J., Clark, A., Cohn, S., Coulehan, J., Crabtree, B., Cummins, S., Davidoff, F., Davies, H., . . . Ziebland, S. (2016).
An open letter to The BMJ editors on qualitative research. BMJ, 352, i563. https://www.bmj.com/content/352/bmj.i563.short
Greenwood, M. D., & Terry, K. J. (2012). Demystifying mixed methods research: Participation in a reading group “sign posts” the way. International Journal of Multiple Research Approaches, 6(2), 98–108. https://doi.org/10.5172/mra.2012.6.2.98
Guba, E. G. (Ed.). (1990). The paradigm dialog (2nd ed.). SAGE.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105–117). SAGE.
Gubrium, J. F., & Holstein, J. A. (1997). The new language of qualitative method. Oxford University Press.
Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. https://doi.org/10.1177/1525822X05279903
Gülay, H., & Önder, A. (2013). A study of social–emotional adjustment levels of preschool children in relation to peer relationships. Education 3-13, 41(5), 514–522. https://doi.org/10.1080/03004279.2011.609827
Guyotte, K. W., & Kuntz, A. M. (2018). Becoming openly faithful: Qualitative pedagogy and paradigmatic slippage. International Review of Qualitative Research, 11(3), 256–270. https://doi.org/10.1525/irqr.2018.11.3.256
Hall, R. (2013). Mixed methods: In search of a paradigm. In T. Lê & Q. Lê (Eds.), Conducting research in a changing and challenging world (pp. 71–78). Nova Science Publishers.
Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery & Psychiatry, 23, 56–62. https://doi.org/10.1136/jnnp.23.1.56
Hammond, F. M., Davis, C. S., Hirsch, M. A., Snow, J. M., Kropf, M. E., Schur, L., Kruse, D., & Ball, A. M. (2021). Qualitative examination of voting empowerment and participation among people living with traumatic brain injury. Archives of Physical Medicine and Rehabilitation, 102(6), 1091–1101. https://doi.org/10.1016/j.apmr.2020.12.016
Hartup, W. W. (1995). The three faces of friendship. Journal of Social and Personal Relationships, 12(4), 569–574. https://doi.org/10.1177/0265407595124012
Hartup, W. W. (1996). The company they keep: Friendships and their developmental significance. Child Development, 67(1), 1–13. https://doi.org/10.2307/1131681
Henry, G. T. (1990). Practical sampling (Vol. 21). SAGE.
Henry, G. T. (2009). Practical sampling. In L. Bickman & D. J. Rog (Eds.), The SAGE handbook of applied research methods (2nd ed., pp. 77–105). SAGE.
Hesse-Biber, S. N. (2010a). Emerging methodologies and methods practices in the field of mixed methods research. Qualitative Inquiry, 16(6), 415–418. https://doi.org/10.1177/1077800410364607
Hesse-Biber, S. N. (2010b). Mixed methods research: Merging theory with practice. Guilford Press.
Hesse-Biber, S. N. (2017). The practice of qualitative research (3rd ed.). SAGE.
Hesse-Biber, S. N., & Leavy, P. (2006). The practice of qualitative research. SAGE.
Hindess, B. (1996). Discourses of power: From Hobbes to Foucault. Blackwell.
Hoza, B. (2007). Peer functioning in children with ADHD. Journal of Pediatric Psychology, 32(6), 655–663. https://doi.org/10.1093/jpepsy/jsm024
Hurst, A. L. (2008). A healing echo: Methodological reflections of a working-class researcher on class. The Qualitative Report, 13(3), 334–352. https://doi.org/10.46743/2160-3715/2008.1582
International Council of Nurses. (2021). The ICN code of ethics for nurses. https://www.icn.ch/system/files/2021-10/ICN_Code-of-Ethics_EN_Web_0.pdf
IoT Agenda. (2016). Definition: Internet of things (IoT).
https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT
Jackson, A. Y., & Mazzei, L. A. (2012). Thinking with theory in qualitative research: Viewing data across multiple perspectives. Routledge.
Janesick, V. J. (2008). Egon Guba: The witty authentic maverick. International Journal of Qualitative Studies in Education, 21(6), 565–567. https://doi.org/10.1080/09518390802489063
JHL Editorial Team. (2020). References from predatory publishers: Policy statement for the Journal of Human Lactation. Journal of Human Lactation, 36(2), 219–220. https://doi.org/10.1177/0890334420912210
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133. https://doi.org/10.1177/1558689806298224
Jones, N. A., Ross, H., Lynam, T., Perez, P., & Leitch, A. (2011). Mental models: An interdisciplinary synthesis of theory and methods. Ecology and Society, 16(1), 46. https://ecologyandsociety.org/vol16/iss1/art46/
Jorgensen, D. L. (1989). Participant observation: A methodology for human studies (Vol. 15). SAGE.
Juritzen, T. I., Grimen, H., & Heggen, K. (2011). Protecting vulnerable research participants: A Foucault-inspired analysis of ethics committees. Nursing Ethics, 18(5), 640–650. https://doi.org/10.1177/0969733011403807
Karsavuran, Z. (2021). Surviving a major crisis: The case of dismissed tourism and hospitality employees. Journal of Policy Research in Tourism, Leisure and Events, 13(2), 243–265. https://doi.org/10.1080/19407963.2020.1787421
Keller, R. (2013). Doing discourse research: An introduction for social scientists (B. Jenner, Trans.). SAGE.
Kirkegaard, E. O. W., & Bjerrekær, J. D. (2016a). OKCupid Kirkegaard Bjerrekær dataset. Internet Archive. https://archive.org/download/OKCupid-Kirkegaard-Bjerrekaer-dataset
Kirkegaard, E. O. W., & Bjerrekær, J. D. (2016b). The OKCupid dataset: A very large public dataset of dating site users. Open Differential Psychology. https://doi.org/10.26775/ODP.2016.11.03
Koro-Ljungberg, M. (2016). Reconceptualizing qualitative research: Methodologies without methodology. SAGE.
Kunnskapsdepartementet. (2017). Digitaliseringsstrategi for universitets- og høyskolesektoren 2017–2021 [Digitalization strategy for the university and university college sector 2017–2021]. https://www.regjeringen.no/no/dokumenter/digitaliseringsstrategi-for-universitets--og-hoyskolesektoren---/id2571085/
Kuntz, A. M. (2015). The responsible methodologist: Inquiry, truth-telling, and social justice. Left Coast Press.
Lahman, M. K. E. (2018). Ethics in social science research: Becoming culturally responsive. SAGE.
Lather, P. (2006). Paradigm proliferation as a good thing to think with: Teaching research in education as a wild profusion. International Journal of Qualitative Studies in Education, 19(1), 35–57. https://doi.org/10.1080/09518390500450144
Leung, D. Y., Kumlien, C., Bish, M., Carlson, E., Chan, P. S., & Chan, E. A. (2021). Using internationalization-at-home activities to enhance the cultural awareness of health and social science research students: A mixed-method study. Nurse Education Today, 100, Article 104851. https://doi.org/10.1016/j.nedt.2021.104851
Lewins, F. (1992). Social science methodology: A brief but critical introduction. Macmillan Education Australia.
Liamputtong, P. (2009). Qualitative research methods (3rd ed.). Oxford University Press.
Liamputtong, P. (2011). Focus group methodology: Principles and practice. SAGE.
Liamputtong, P. (2013). Qualitative research methods (4th ed.). Oxford University Press.
Liamputtong, P. (2020).
Qualitative research methods (5th ed.). Oxford University Press.
Lincoln, Y. S. (2015). Critical qualitative research in the 21st century: Challenges of new technologies and the special problem of ethics. In G. S. Cannella, M. S. Pérez, & P. A. Pasque (Eds.), Critical qualitative inquiry: Foundations and futures (pp. 197–214). Left Coast Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. SAGE.
Lincoln, Y. S., & Guba, E. G. (2013). The constructivist credo. Left Coast Press.
Lincoln, Y. S., Lynham, S. A., & Guba, E. G. (2011). Paradigmatic controversies, contradictions, and emerging confluences, revisited. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 97–128). SAGE.
Lincoln, Y. S., Lynham, S. A., & Guba, E. G. (2018). Paradigmatic controversies, contradictions, and emerging confluences, revisited. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (5th ed., pp. 108–150). SAGE.
Locke, L. F., Spirduso, W. W., & Silverman, S. J. (2014). Proposals that work: A guide for planning dissertations and grant proposals (3rd ed.). SAGE.
Lofland, J., & Lofland, L. H. (1995). Analyzing social settings: A guide to qualitative observation and analysis. Wadsworth.
Lumsden, K. (2019). Reflexivity: Theory, method, and practice. Routledge.
Macfarlane, B. (2010). Values and virtues in qualitative research. In M. Savin-Baden & C. H. Major (Eds.), New approaches to qualitative research: Wisdom and uncertainty (pp. 19–27). Routledge.
MacInnes, J. (2019). Little quick fix: See numbers in data. SAGE.
Malterud, K., Siersma, V. D., & Guassora, A. D. (2016). Sample size in qualitative interview studies: Guided by information power. Qualitative Health Research, 26(13), 1753–1760. https://doi.org/10.1177/1049732315617444
Markham, A. N. (2018). Ethnography in the digital internet era: From fields to flows, descriptions to interventions. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (5th ed., pp. 650–668). SAGE.
Markham, A. N., Tiidenberg, K., & Herman, A. (2018). Ethics as methods: Doing ethics in the era of big data research – introduction. Social Media + Society, 4(3), 1–9. https://doi.org/10.1177/2056305118784502
Martin, A. J. (2001). The student motivation scale: A tool for measuring and enhancing motivation. Journal of Psychologists and Counsellors in Schools, 11, 1–20. https://doi.org/10.1017/S1037291100004301
Martin, A. J. (2002). Motivation and academic resilience: Developing a model for student enhancement. Australian Journal of Education, 46(1), 34–49. https://doi.org/10.1177/000494410204600104
Martin, A. J. (2003). The student motivation scale: Further testing of an instrument that measures school students’ motivation. Australian Journal of Education, 47(1), 88–106. https://doi.org/10.1177/000494410304700107
Maxwell, J. A. (1996). Qualitative research design: An interactive approach. SAGE.
Maxwell, J. A. (2006). Literature reviews of, and for, educational research: A commentary on Boote and Beile’s “Scholars before researchers.” Educational Researcher, 35(9), 28–31. https://doi.org/10.3102/0013189X035009028
Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). SAGE.
Maxwell, J. A., & Mittapalli, K. (2008). Theory. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 876–880). SAGE.
Maxwell, J., Chmiel, M., & Rogers, S. E. (2015).
Designing integration in multimethod and mixed method research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 223–239). Oxford University Press.
Mayan, M. J. (2009). Essentials of qualitative inquiry. Left Coast Press.
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass.
Messick, S. (1980). Test validity and the ethics of assessment. American Psychologist, 35(11), 1012–1027. https://doi.org/10.1037/0003-066X.35.11.1012
Meyer, M. N. (2018). Practical tips for ethical data sharing. Advances in Methods and Practices in Psychological Science, 1(1), 131–144. https://doi.org/10.1177/2515245917747656
Miciak, M., & Daum, C. (2023). Scratching the underbelly of research design: Developing clear research question(s). In J. Cheek & E. Øby, Research design: Why thinking about design matters (pp. 59–65). SAGE.
Mikami, A. Y. (2010). The importance of friendship for youth with attention-deficit/hyperactivity disorder. Clinical Child and Family Psychology Review, 13, 181–198. https://doi.org/10.1007/s10567-010-0067-y
Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). SAGE.
Miles, M. B., Huberman, A. M., & Saldaña, J. (2020). Qualitative data analysis: A methods sourcebook (4th ed.). SAGE.
Mills, K. A. (2018). What are the threats and potentials of big data for qualitative research? Qualitative Research, 18(6), 591–603. https://doi.org/10.1177/1468794117743465
Morgan, D. L. (1989). Adjusting to widowhood: Do social networks really make it easier? The Gerontologist, 29(1), 101–107. https://doi.org/10.1093/geront/29.1.101
Morgan, D. L. (2016). Essentials of dyadic interviewing. Left Coast Press.
Morgan, D. L. (2019). Basic and advanced focus groups. SAGE.
Morse, J. M. (1991). Approaches to qualitative–quantitative methodological triangulation. Nursing Research, 40(2), 120–123. https://doi.org/10.1097/00006199-199103000-00014
Morse, J. M. (1998). The contracted relationship: Ensuring protection of anonymity and confidentiality. Qualitative Health Research, 8(3), 301–303. https://doi.org/10.1177/104973239800800301
Morse, J. M. (2008). Deceptive simplicity. Qualitative Health Research, 18(10), 1311. https://doi.org/10.1177/1049732308322486
Morse, J. M. (2017). Essentials of qualitatively-driven mixed-method designs. Routledge.
Morse, J. M., & Cheek, J. (2014). Making room for qualitatively-driven mixed-method research. Qualitative Health Research, 24(1), 3–5. https://doi.org/10.1177/1049732313513656
Morse, J. M., & Cheek, J. (2015). Introducing qualitatively-driven mixed-method designs. Qualitative Health Research, 25(6), 731–733. https://doi.org/10.1177/1049732315583299
Morse, J. M., Cheek, J., & Clark, L. (2018). Data-related issues in qualitatively driven mixed-method designs: Sampling, pacing, and reflexivity. In U. Flick (Ed.), The SAGE handbook of qualitative data collection (pp. 564–583). SAGE.
Morse, J. M., & Niehaus, L. (2009). Mixed method design: Principles and procedures. Left Coast Press.
Nagel, D. A., Burns, V. F., Tilley, C., & Aubin, D. (2015). When novice researchers adopt constructivist grounded theory: Navigating less travelled paradigmatic and methodological paths in PhD dissertation work. International Journal of Doctoral Studies, 10, 365–383. http://dx.doi.org/10.28945/2300
Nardi, P. M. (2018). Doing survey research: A guide to quantitative methods (4th ed.). Routledge.
National Association of Social Workers. (2021). Read the code of ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English
National Education Association. (Ed.). (2021). Code of ethics of the education profession. In NEA handbook 2020/2021 (pp. 429–430).
National Science Foundation. (2014). Grant general conditions (GC-1). https://www.nsf.gov/pubs/policydocs/gc1/feb14.pdf
Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15, 263–280. https://doi.org/10.1002/ejsp.2420150303
Norum, K. E. (2008). Reality and multiple realities. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research (Vol. 2, pp. 737–739). SAGE.
O’Leary, Z. (2017). The essential guide to doing your research project (3rd ed.). SAGE.
Oppenheim, A. N. (1992). Questionnaire design, interviewing, and attitude measurement (New ed.). Pinter Publishers Ltd.
O’Reilly, M., & Kiyimba, N. (2015). Advanced qualitative research: A guide to using theory. SAGE.
Pangle, L. S. (2003). Aristotle and the philosophy of friendship. Cambridge University Press.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). SAGE.
Patton, M. Q. (2012). Essentials of utilization-focused evaluation. SAGE.
Patton, M. Q. (2015). Qualitative research & evaluation methods (4th ed.). SAGE.
Pelto, P. J. (2015). What is so new about mixed methods? Qualitative Health Research, 25(6), 734–745. https://doi.org/10.1177/1049732315573209
Polkinghorne, D. E. (2005). Language and meaning: Data collection in qualitative research. Journal of Counseling Psychology, 52(2), 137–145. https://doi.org/10.1037/0022-0167.52.2.137
Ponterotto, J. G. (2006). Brief note on the origins, evolution, and meaning of the qualitative research concept thick description. The Qualitative Report, 11(3), 538–549. https://doi.org/10.46743/2160-3715/2006.1666
Preissle, J., & deMarrais, K. (2011). Teaching qualitative research responsively. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry and global crises (pp. 31–39). Left Coast Press.
Presser, S. (1985). The use of survey data in basic research in the social sciences. In C. E. Turner & E. Martin (Eds.), Surveying subjective phenomena (Vol. 2, pp. 93–114). Russell Sage Foundation.
Punch, K. F. (2003). Survey research: The basics. SAGE.
Rager, K. B. (2005). Compassion stress and the qualitative researcher. Qualitative Health Research, 15(3), 423–430. https://doi.org/10.1177/1049732304272038
Ramcharan, P., & Cutcliffe, J. R. (2001). Judging the ethics of qualitative research: Considering the “ethics as process” model. Health and Social Care in the Community, 9(6), 358–366. https://doi.org/10.1046/j.1365-2524.2001.00323.x
Ravitch, S. M., & Riggan, M. (2017). Reason & rigor: How conceptual frameworks guide research (2nd ed.). SAGE.
Richards, L. (2015). Handling qualitative data: A practical guide (3rd ed.). SAGE.
Richtig, G., Berger, M., Lange-Asschenfeldt, B., Aberer, W., & Richtig, E. (2018). Problems and challenges of predatory journals. Journal of the European Academy of Dermatology & Venereology, 32(9), 1441–1449. https://doi.org/10.1111/jdv.15039
Ryen, A. (2011). Ethics and qualitative research. In D. Silverman (Ed.), Qualitative research (3rd ed., pp. 416–438). SAGE.
Ryle, G. (1971). Collected papers.
Volume II: Collected essays, 1929–1968. Hutchinson.
Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). SAGE.
Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18(2), 179–183. https://doi.org/10.1002/nur.4770180211
Saris, W. E., & Gallhofer, I. N. (2014). Design, evaluation, and analysis of questionnaires for survey research (2nd ed.). John Wiley.
Schirmer, J. (2009). Ethical issues in the use of multiple survey reminders. Journal of Academic Ethics, 7, 125–139. https://doi.org/10.1007/s10805-009-9072-5
Schostak, J. F. (2002). Understanding, designing and conducting qualitative research in education: Framing the project. Open University Press.
Schostak, J., & Schostak, J. (2008). Radical research: Designing, developing and writing research to make a difference. Routledge.
Schwandt, T. A. (1994). Constructivist, interpretivist approaches to human inquiry. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 118–137). SAGE.
Schwandt, T. A. (2015). The SAGE dictionary of qualitative inquiry (4th ed.). SAGE.
scrapinghub. (2020). What is web scraping? https://scrapinghub.com/what-is-web-scraping
Silverman, D. (2001). Interpreting qualitative data: Methods for analysing talk, text and interaction (2nd ed.). SAGE.
Singer, E., & Bossarte, R. M. (2006). Incentives for survey participation: When are they “coercive”? American Journal of Preventive Medicine, 31(5), 411–418. https://doi.org/10.1016/j.amepre.2006.07.013
Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112–141. https://doi.org/10.1177/0002716212458082
Sirkin, R. M. (2006). Statistics for the social sciences (3rd ed.). SAGE.
Skukauskaite, A., Noske, P., & Gonzales, M. (2018). Designing for discomfort: Preparing scholars for journeys through qualitative research. International Review of Qualitative Research, 11(3), 334–349. https://doi.org/10.1525/irqr.2018.11.3.334
Small, M. L. (2011). How to conduct a mixed methods study: Recent trends in a rapidly growing literature. Annual Review of Sociology, 37, 57–86. https://doi.org/10.1146/annurev.soc.012809.102657
Smith, J. K. (1983). Quantitative versus qualitative research: An attempt to clarify the issue. Educational Researcher, 12(3), 6–13. https://doi.org/10.3102/0013189X012003006
Sparkes, A. C., & Smith, B. (2014). Qualitative research methods in sport, exercise and health: From process to product. Routledge.
Spradley, J. P. (1979). The ethnographic interview. Wadsworth/Cengage Learning.
St. Pierre, E. A. (2016). The long reach of logical positivism/logical empiricism. In N. K. Denzin & M. D. Giardina (Eds.), Qualitative inquiry through a critical lens (pp. 19–29). Routledge.
Stake, R. E. (2010). Qualitative research: Studying how things work. Guilford Press.
Stenbacka, C. (2001). Qualitative research requires quality concepts of its own. Management Decision, 39(7), 551–555. https://doi.org/10.1108/EUM0000000005801
Stockemer, D. (2019). Quantitative methods for the social sciences: A practical introduction with examples in SPSS and Stata. Springer Nature.
Strauss, A., & Corbin, J. (1998).
Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). SAGE.
Street, A. (1995). Nursing replay: Researching nursing culture together. Churchill Livingstone.
Strong, G. (2019). Understanding quality in research: Avoiding predatory journals. Journal of Human Lactation, 35(4), 661–664. https://doi.org/10.1177/0890334419869912
Swaminathan, R., & Mulvihill, T. M. (2017). Critical approaches to questions in qualitative research. Routledge.
Tashakkori, A., & Teddlie, C. (2003). Preface. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social & behavioral research (pp. ix–xv). SAGE.
Taylor, S. (2013). What is discourse analysis? Bloomsbury Academic.
Teddlie, C., & Tashakkori, A. (2003). Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social & behavioral research (pp. 3–50). SAGE.
Teddlie, C., & Tashakkori, A. (2011). Mixed methods research: Contemporary issues in an emerging field. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 285–300). SAGE.
Thorne, S. (2008). Interpretive description. Left Coast Press.
Thorne, S., Oliffe, J., Kimsing, C., Hislop, T. G., Stajduhar, K., Harris, S. R., Armstrong, E. A., & Oglov, V. (2010). Helpful communications during the diagnostic period: An interpretive description of patient preferences. European Journal of Cancer Care, 19, 746–754. https://doi.org/10.1111/j.1365-2354.2009.01125.x
Tourangeau, R., & Bradburn, N. M. (2010). The psychology of survey response. In P. V. Marsden & J. D. Wright (Eds.), Handbook of survey research (2nd ed., pp. 315–346). Emerald Group.
References  311 Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851. https://doi.org/10.1177/1077800410383121 Tracy, S. J. (2020). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact (2nd ed.). John Wiley . Tracy, S. J., & Rivera, K. D. (2010). Endorsing equity and applauding stay-at-home moms: How male voices on work-life reveal aversive sexism and flickers of transformation. Management Communication Quarterly, 24(1), 3–43. https://doi.org/10.1177/0893318909352248 Tubbs, F. (2018, November 21). The impact of big data on marketing. The Data Administration Newsletter. https://tdan.com/the-impact-of-big-data-on-marketing/24044 van den Hoonaard, W. C., & van den Hoonaard, D. K. (2013). Essentials of thinking ethically in qualitative research. Left Coast Press. Vehovar, V., Toepoel, V., & Steinmetz, S. (2016). Non-probability sampling. In C. Wolf, D. Joye, T. W. Smith, & Y. Fu (Eds.), The SAGE handbook of survey methodology (pp. 329–345). SAGE. Wacquant, L. (2002). The curious eclipse of prison ethnography in the age of mass incarceration. Ethnography, 3(4), 371–397. https://doi.org/10.1177/1466138102003004012 Welford, C., Murphy, K., & Casey, D. (2012). Demystifying nursing research terminology: Part 2. Nurse Researcher, 19(2), 29–35. https://doi.org/10.7748/nr2012.01.19.2.29.c8906 Wentzel, A. (2018). A guide to argumentative research writing and thinking: Overcoming challenges. Routledge. Wiles, R. (2013). What are qualitative research ethics? Bloomsbury Academic. Wolcott, H. F. (1994). Transforming qualitative data: Description, analysis, and interpretation. SAGE. Wolcott, H. F. (2008). Ethnography: A way of seeing (2nd ed.). AltaMira Press. Woolley, C. M. (2009). Meeting the mixed methods challenge of integration in a sociological study of structure and agency. Journal of Mixed Methods Research, 3(1), 7–25. https://doi. 
org/10.1177/1558689808325774 World Medical Association. (2018). WMA Declaration Of Helsinki – ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration -of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ Yin, R. K. (2006). Mixed methods research: Are the methods genuinely integrated or merely parallel. Research in the Schools, 13(1), 41–47. Yin, R. K. (2009). Case study research: Design and methods, (4th ed.). SAGE. Yin, R. K. (2014). Case study research: Design and methods (5th ed.). SAGE. Yin, R. K. (2015). Causality, generalizability, and the future of mixed methods research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 652–664). Oxford University Press. Yin, R. K. (2016). Qualitative research from start to finish (2nd ed.). Guilford Press.
  
INDEX

anonymity, 27, 29, 32-40, 158, 233
  pseudonyms and, 32-34
  uses of, 33-39
  versus confidentiality, 32-33
audit trail: see rigor in qualitative research
bias: see social desirability bias
CAQDAS: see computer-assisted qualitative data analysis software
categories: see coding
codebook, use in qualitative analysis, 164-65
coding in qualitative analysis, 125, 152, 154, 155, 161-68, 176, 179
  categories, 162-63
  choice of strategy, 165-66
  codes, 155, 162-65
  critiques of, 168
  data condensation, 159-61
  definition of, 152, 161
  in grounded theory, 166
  inductive vs deductive approach to, 163-65, 228
  when and why to use, 168
computer-assisted qualitative data analysis software, 161-62
confidentiality of participants, 27, 29, 32-35, 37, 42, 158
  description of, 32
  versus anonymity, 32
consent: see informed consent
convergent design: see mixed methods research
core component: see mixed methods research
correlational approach: see quantitative data analysis, approaches in
credibility: see qualitative research
data, 12
  definition of, 13, 71-73, 206
  demographic, 34, 126
  digital, 37
  internet information as, 37-39
  interval/ratio in quantitative research, 201-2
  methodological considerations, 72-74
  nominal in quantitative research, 199-200, 202
  ordinal in quantitative research, 200, 202, 204
  photographs, 133
  reuse and repurposing, 35-37
  sharing of, 35-37
data analysis: see qualitative data analysis; quantitative data analysis
data collection: see qualitative data collection; quantitative data collection
data condensation: see coding in qualitative analysis
‘declaring your hand’, 271-82, 285
deductive reasoning, 55-58, 93-94
  description of, 56
  inductive reasoning versus, 93
  quantitative research use of, 93-94
descriptive approach: see quantitative data analysis, approaches in
discourse analysis, 107-8
epistemology
  definition of, 76
  impact on research design, 76-78
  see also: inquiry paradigms
ethics, 20-21, 27-43, 231, 283
  and incentives for participation, 231
  Belmont Report, 39
  codes of, 41
  definition of, 28
  description of, 20-21
  issues in, 34-35
  professional guidelines, 41
  see also: anonymity; confidentiality of participants; informed consent; vulnerable populations
ethics committees, 39-41, 173
ethnography: see qualitative research
experimental approach: see quantitative data analysis, approaches in
external validity: see validity in quantitative research
feasibility of a research design, 47, 51-53
focus groups, 130-31, 145, 277
  moderator in, 130
generalizability: see quantitative research
grounded theory, 155, 166-67
  analytical strategy for, 166
  coding in, 166-67
  description of, 166
hypothesis
  definition of, 56, 197
  developing, 94, 196
  power of a hypothesis test, 197-98
  research question as,
  standardized observations and, 128
  testing, 94, 103, 197-98
hypothetico-deductive thinking, 94, 97
inductive reasoning
  deductive reasoning versus, 93
  description of, 57-58, 96-97
  inquiry based on, 57, 82, 94-95
  qualitative research use of, 94-95, 98, 164
informed consent
  considerations for, 30-31
  definition of, 29
  for internet data, 38-39
  layers of, 36-37
  in longitudinal study, 31
  purpose of, 29
  for reusing, repurposing, and sharing of data, 36
  seeking, 30
  vulnerable populations and, 31-32
  written form, 30
inquiry paradigms, 78-85, 96, 99-101
  anti-paradigmatic stance, 260-61
  connection to methodology, 78, 82-85
  constructivism, 81-82, 99, 100
  definition of, 74
  description of, 74-75
  mixing, 259-61
  paradigmatic stance, 74-76, 82-85
  positivism, 78-80, 100, 142
  post-positivism, 80-81, 100, 142
  pragmatism, 260
  see also: epistemology; ontology
interpretivism: see inquiry paradigms
interviews in qualitative research
  data analysis in, 154-56
  individual interviews, 129-31
  interview guide, 132, 134, 137-38
  interview probes, 134
  lines of inquiry, 132-36, 138
  memos in, 155-56
  online, 145
  questions, designing, 135-36
  recording of, 157-58
  sample selection, 140-41
  sample size, 172-73
  structure of, 125-27
  transcription, 157-58
  trial interviews, 136-37
  see also: focus groups
iteration
  in qualitative data analysis, 152-61
  in research design, 2-4, 18, 58-65, 281-284
  schematic example of iterative process, 283
layers of consent: see informed consent
literature
  books and book chapters, 10
  categorization of, 8
  decision making about, 8-12
  gray literature, 11-12
  in research question development, 53-55
  journal articles, 9-10
  open access, 9-10
  peer review in, 9
  predatory journals, 10-11
  relevant, 5-8
  working with, 5-8
literature review, 7-8, 179
measurement in quantitative research
  consistency, 226
  level of, 115, 201-02
  reliability of, 226-27
  see also: validity in quantitative research
measurement instruments in quantitative research
  definition of, 199, 211
  design and development of, 199, 213-29
  implementation of, 207, 229-35
  presentation of, 213-14, 231
  pretesting of, 224-25, 234-35
  purpose of,
  questionnaires, surveys: see surveys in quantitative research
  response rates for, 230-31
  validity affected by, 229
  see also: surveys
measurement items in quantitative research
  consistent measurements obtained using, 226-27
  definition of, 213
  demands on respondent, 224
  development of, 220-24
  form of, 227-29
  response types for, 227-28
memos: see qualitative data analysis
methodology
  data and, 71-74
  definition of, 12, 14, 69, 71
mixed methods research
  characteristics of, 243-47
  as contested field, 246-47
  components in, 244-45
  definitions of, 242-45
  design, types of, 251
  designing, 244-57
  diagramming in, 262
  mixing in, 257-61
  paradigmatic-related considerations, 259-61
  priority and timing of components, 249-53
  purpose in, 248, 253-54
  quasi-mixed designs, 261
  reasons for using, 247-49, 253-54
  strategies for, 262-65
  typologies, 253
  vs multimethod research, 245
  see also: notation in mixed methods research
mixing: see mixed methods research
moderator: see focus groups
multimethod research: see mixed methods research
notation in mixed methods research
  capitalization in, 249
  description of, 249-51
  + sign, 251
  → sign, 251
online information, trustworthiness of, 12
onto-epistemological assumptions, 76-78
ontology
  definition of, 76
  impact on research design, 76-78
  realism, 76
  relativism, 76-77
  see also: inquiry paradigms
paradigmatic stance: see inquiry paradigms
positivism: see inquiry paradigms
post-positivism: see inquiry paradigms
pseudonyms: see anonymity
qualitative data analysis
  coding: see coding in qualitative analysis
  interpretation, 168-71
  iterative and reflexive thinking in, 152-54, 157-61
  memos, 154-56, 158-59, 162
  overview of, 152-54
  steps involved in, 153-54, 157-59
  strategies for, 157
  timing of, 153, 157-58, 168-69
  trustworthiness of, 151, 170-71
  see also: coding in qualitative analysis; thick description; thick interpretation
qualitative data collection
  data saturation in, 171-72
  description of, 98-99, 119-122, 145
  nonstandardized observations, 129
  reflexive thinking in, 127-29
  sensitizing concepts, 137-39
  strategies for, 157-59
  theoretical saturation, 172
  timing of, 156-57
qualitative research
  basic form of, 105
  “big tent” criteria for, 109-11
  data produced by, 92, 99
  definition of, 92, 96
  design considerations, 123
  diversity of, 105-106, 110, 112, 121, 123-124
  features of, 96-101
  inductive reasoning in, 94-95
  inquiry strategies, 122-25
  ontological position of, 98
  purpose of, 92-93
  quantitative research and, 92-118
  specialized forms of, 106-10
  trustworthiness in, 99, 170-71
quantitative data analysis
  approaches in, 102-03
  procedures in, 193-94
  statistical analysis, 213, 223, 228-29
quantitative data collection
  description of, 211-12
  randomized controlled trials, 102-03
  standardized observations, 127-28
  strategies for, 214-20
  see also: measurement instruments; measurement items; surveys in quantitative research; variables; validity in quantitative research
quantitative research
  credibility of, 184-96, 205, 217
  deductive reasoning in, 93-94
  definition of, 92-93, 96-97
  design considerations, 181-85
  diversity of, 101-105
  features of, 13, 96-101
  generalizability of findings, 97, 142, 187-90
  methods used in, 103
  ontological position of, 97
  purpose of, 92-93
  qualitative research and, 92-118
  quasi-experimental approach, 101-02, 196
  statistics in, 182-83
quasi-experimental approach: see quantitative research
quasi-mixed design: see mixed methods research
questionnaires: see measurement instruments in quantitative research
randomized controlled trial: see quantitative data collection
realism: see ontology
reasoning: see deductive reasoning; inductive reasoning
reflexivity, 1, 18-19, 284
  reflexive questioning, 54
  reflexive research questions, 58
  reflexive thinking, 18-21, 32, 124, 138, 156, 263-264, 266, 280
  see also: qualitative data analysis
relativism: see ontology
reliability: see validity in quantitative research
research
  qualitative: see qualitative research
  quantitative: see quantitative research; mixed methods research
  substantive area of, 5
  theoretical framework of, 14
research area, 21, 50, 182-83
research design
  cyclic thinking in, 4
  definition of, 2
  see also: inquiry paradigms; methodology; qualitative research, design considerations; quantitative research, design considerations
research ethics: see ethics
research problem
  definition of, 50
  focusing of, 51
  key words in, 52
research questions
  development of, 58-65
  forms of, 55
  as hypothesis, 197
  iterative development, 58
  literature used in development of, 53-55
response rate: see measurement instruments in quantitative research
rigor in qualitative research, 168, 170-71
  audit trail, 168
  see also: qualitative data analysis, trustworthiness of
sampling in qualitative research
  decisions regarding, 140-43
  size, 141-42, 173-75
  strategies for, 140-45
sampling in quantitative research
  considerations in, 185-90
  estimation in, 186
  size, 185-86
  strategies for, 185, 187-191
saturation: see qualitative data collection
scientific method, 79
sensitizing concepts: see qualitative data collection
social desirability bias, 232
statistical reasonableness: see quantitative research
study population in quantitative research
  analysis procedures for, 193-95
  definition of, 185
  description of, 185-86
  research questions about, 195-96
  selection of, 185-87, 192
  statistically reasonable claims about, 187
  see also: sampling in quantitative research; variables
surveys in quantitative research, 103, 125, 213-14, 229-34
  response rates for, 230-31
  uses of, 103
  see also: measurement instruments in quantitative research; questionnaires
theoretical saturation: see qualitative data collection
theory
  complexity of, 17
  definitions of, 15-16
  impact on research design, 16-17
thick description, 94, 99, 169
thick interpretation, 57, 99, 169
transcription: see interviews in qualitative research
triangulation, 84, 171, 248
validity in quantitative research
  construct validity, 61, 223
  content validity, 61, 220
  definition of, 213
  internal validity, 90
  external validity, 90, 187, 191, 212
  of measurement, 226-27
  statistical, 183, 212
  vs reliability, 226-27
variables
  construct of, 216-19
  definition of, 56, 192, 199
  effect of number of, 189
  measurement of, 199-201, 212-222
  operational definition of, 216-19, 221
vulnerable populations
  definition of, 31
  informed consent for, 31-32