Chapter 1
INTRODUCTION
Your client has come to you with a number of complaints. "I just feel lousy. I don't have any energy, I just never want to go out anymore. Sometimes, especially at night, I find myself crying over nothing. I don't even feel like eating. My boss pushes me around at work: he makes me do all kinds of things that the other employees don't have to do. But I can't tell him I won't do the work. Then I feel even worse." After a good deal of exploration, you help the client focus the complaints until you see certain patterns. Among them, the client seems to be depressed, to have low self-esteem, and to be very unassertive. You devise an intervention program to work with all of these areas, but one thing is missing. Although you're pretty sure your interventions will be the right ones, other than asking the client how she feels and maybe making a few observations on your own, you don't have a clear way of assessing accurately whether or not there will be real improvement in each of these areas. After all, they are pretty hard to measure with any degree of objectivity. Or are they?
This book will help you answer that question. Although we don't pretend to have all the answers to all the questions you might have on how to measure your client's problems, we hope to be able to help you grapple with a most important issue for clinical practice: how we can more or less accurately and objectively, and without a lot of aggravation and extra work, measure some of the most commonly encountered clinical problems.
In the short case example at the beginning of this chapter, there actually are several ways a practitioner could have measured the client's problems. We will briefly examine several methods of measurement in this book, but the main focus is on one: the use of instruments that the client can fill out himself or herself, and that give a fairly clear picture of the intensity or magnitude of a given problem. In the case example presented earlier, the practitioner might have selected one of several readily accessible, easy-to-use instruments to measure the client's degree of depression, level of self-esteem, or assertiveness. Indeed, instruments to measure each of these problems are included in Part II of this volume and all of Volume 2.
ACCOUNTABILITY IN PRACTICE
The last decade or so has seen increasing pressure brought to bear on practitioners to be "accountable" for what they do in practice. Although the term accountability has several meanings, we believe the most basic meaning of accountability is this: we have to be responsible for what we do with our clients. The most crucial aspect of that responsibility is a commitment to delivering effective services.
There are very few in the human services who would deny the importance of providing effective services to clients as a major priority. Where the differences come about is in deciding how to go about implementing or operationalizing this commitment to providing effective services. Conscientious monitoring and measurement of one's practice and the client's functioning is a primary way to fulfill this commitment.
Use of Research in Practice
Let's face it. Many human services practitioners are not very enamored with research, often because they do not see its value -- how it can really make a difference in practice.
Part of the problem may be that researchers have not done their best to demystify the research process. Most research texts reflect a way of thinking about many phenomena that is very different from the way many practitioners view the world. After all, most of us in the helping professions are there because we want to work with people, not numbers.
But research does offer some very concrete ways of enhancing our practice. First of all, recent years have seen a tremendous increase in the number of studies with positive outcomes. Many hundreds of studies point to a wide range of clinical techniques and programs that have been successful in helping clients with a multitude of problems (Fischer, 1981, 1993). Thus, practitioners can now select a number of intervention techniques or programs on the basis of their demonstrated success with one or more problem configurations as documented in numerous studies (e.g., see Barlow, 1985; Corcoran, 1992). A second practical value of research is the availability of a range of methods to help us monitor how well we are doing with our clients, that is, to keep track of our clients' problems over time and, if necessary, make changes in our intervention program if it is not proceeding as well as desired. This is true whether our clients are individuals, couples, families, or other groups.
And third, we also have the research tools to evaluate our practice, to make decisions about whether or not our clients' problems are actually changing, and also whether or not it was our interventions that helped them change. Both of these areas -- the monitoring and the evaluating of practice -- are obviously of great importance to practitioners, and the practical relevance of the research tools is what makes recent developments in research so exciting for all of us.
By and large, the recent developments that have made research more accessible to practitioners have come about in two areas: evaluation designs for practice, and new measurement tools. While the focus of this book is on measurement, we will provide a brief review of designs for practice, and the relation of these designs to measurement.
EVALUATION DESIGNS FOR PRACTICE
There is a wide variety of evaluation designs that can provide useful information for practice. The most common -- the ones most practitioners learned about in their educational programs -- are the experimental designs, field studies, and surveys in which the researcher collects data on large groups of people or events and then analyzes those data using a variety of mathematical and statistical techniques. Some of these designs (e.g., those using random assignment, control and contrast groups) are best suited for informing our practice about which interventions work best with what clients with what problems in what situations. But despite their value these designs are rarely used in actual practice by practitioners, because they often require more sophisticated knowledge, time, or resources than are available to most practitioners.
A second set of designs that can be of value to practitioners are the single-system designs. These designs, which allow practitioners to monitor and evaluate each case, have been called by a number of terms: single case experimental designs, single subject or single N designs, time series designs, and single organism designs. While all these terms basically refer to the same set of operations, we prefer the term single-system design because it suggests that the designs do not have to be limited to a single client but can be used with couples, families, groups, organizations, or larger collectivities.
These designs, elaborated in several recent books (Bloom, Fischer, and Orme, 1994; Bloom and Fischer, 1982; Jayaratne and Levy, 1979; Kazdin, 1982; Barlow et al., 1984; Barlow and Hersen, 1984; Kratochwill, 1978), are a relatively new development for the helping professions. And while this new technology is increasingly being made available to practitioners, a brief review of the basic components of single-system designs is in order.
The first component of single-system designs is the specification of a problem which the practitioner and client agree needs to be worked on. This problem can be in any of the many areas of human functioning -- behavioral, cognitive, affective, or the activities of individuals or groups.
The second component is selecting a way to measure the problem. In the past, finding ways to measure problems has been a major stumbling block for many practitioners. But there are now a wide variety of ways to measure problems -- some of which were once thought to be "unmeasurable" -- available to practitioners of diverse theoretical orientations. These will be discussed throughout the rest of this book.
The third component is the implementation of the design itself -- the systematic collection of information about the problem on a regular basis. This generally starts before the intervention proper is begun -- the baseline -- and continues over time until the intervention is completed. This use of "repeated measures" -- collecting information on a problem over time -- is a hallmark of single-system designs, and provides the basis for the monitoring and evaluation functions described earlier.
The essence of single-system designs is the comparison of the intensity, level, magnitude, frequency, or duration of the problem at different phases of the process. These comparisons are typically plotted on a graph, as will be illustrated below, to facilitate visual examination. For example, one might construct an elementary study of a client's progress by comparing information collected during the baseline on the client's level of depression with information collected during the intervention period to see if there is any change in the problem. A graphed example of such a design, called an A-B design (A = baseline; B = intervention), is presented in Figure 1.1. As with all single-system designs, the level of the problem is plotted along the vertical axis and the time period is plotted along the horizontal axis. In this case, the client was assessed using a self-administered depression scale once a week. During a three-week assessment period (the baseline) the client filled out the questionnaire three times. The intervention was begun the fourth week and the steady decline in scores shows that the level of the client's depression was decreasing.
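The phase comparison at the heart of an A-B design can be sketched in a few lines of code. The scale, the scores, and the phase lengths below are hypothetical illustrations, not data from the case described; the sketch simply shows how weekly scores from the two phases might be compared (higher scores assumed to mean more severe depression).

```python
# Hypothetical weekly scores from a self-report depression scale
# (higher = more depressed); values and phase lengths are illustrative.

def phase_mean(scores):
    """Average score within one phase of the design."""
    return sum(scores) / len(scores)

baseline_a = [62, 60, 61]          # Phase A: three weekly baseline measurements
intervention_b = [58, 52, 47, 40]  # Phase B: weekly scores once intervention begins

change = phase_mean(baseline_a) - phase_mean(intervention_b)
print(f"Mean baseline score:     {phase_mean(baseline_a):.2f}")
print(f"Mean intervention score: {phase_mean(intervention_b):.2f}")
print(f"Mean improvement:        {change:.2f} points")
```

In practice the same repeated measurements would be plotted, as in Figure 1.1, so the trend across phases can be inspected visually rather than summarized only by phase means.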
A more sophisticated design, combining or alternating different intervention or nonintervention (baseline) phases, can indicate not only whether the client's problem changed, but whether the practitioner's intervention program is responsible for the change. An example of one of several designs that can provide evidence of the relationship between the intervention and the change in the client's problems is presented in Figure 1.2. This example, in which the goal is to increase the client's assertiveness, is called an A-B-A-B (reversal or withdrawal) design. Evidence of the link between intervention and a change in the problem is established by the fact that the problem diminishes only when the intervention is applied and returns to its previous level when the intervention is withdrawn or applied to another problem.
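The reversal logic of the A-B-A-B design can also be expressed as a simple check: improvement should appear only during the B phases and fade when the intervention is withdrawn. The weekly assertiveness ratings below are hypothetical (higher assumed to mean more assertive), and the phase labels are illustrative, not from Figure 1.2.

```python
# Hypothetical weekly assertiveness ratings (higher = more assertive);
# all values are illustrative assumptions.

def mean(xs):
    return sum(xs) / len(xs)

phases = {
    "A1": [20, 22, 21],  # first baseline
    "B1": [30, 35, 38],  # intervention applied
    "A2": [24, 23, 22],  # intervention withdrawn
    "B2": [33, 37, 41],  # intervention reinstated
}

# Evidence of a link between intervention and change: the level improves
# in each B phase and returns toward baseline in the second A phase.
causal_pattern = (
    mean(phases["B1"]) > mean(phases["A1"])
    and mean(phases["A2"]) < mean(phases["B1"])
    and mean(phases["B2"]) > mean(phases["A2"])
)
print("Pattern consistent with an intervention effect:", causal_pattern)
```

The visual version of this check is exactly what inspection of an A-B-A-B graph provides: the problem level rises and falls in step with the application and withdrawal of the intervention.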
As the practitioner collects information on the problem, he or she also is getting feedback on whether the intervention is producing the desired effects and therefore should be continued, or should be changed; this is the monitoring function. An example of the effects of such monitoring is presented in Figure 1.3. In this example, the practitioner was not satisfied with the slow progress shown in the first intervention period (Phase B) and changed his intervention to produce a more positive result (Phase C).
Finally, a review of all the information collected will provide data on success in attaining the desired goal -- the evaluation of the outcome.
Single-system designs seem to offer excellent opportunities for actual utilization. They can be built into practice with each and every case; they provide direct feedback enabling the practitioner to readily assess, monitor, and evaluate the case; and they allow the practitioner to make changes in the intervention program if it appears not to be working. Thus, the instruments described in this book will probably find their most frequent use in the context of single-system designs.
However, the same instruments can be -- and have been -- used in classical research. The selection of a design depends on the question one is asking: for some questions, a classical design is more appropriate; for others, one of the single-system designs would be the design of choice. (A comparison of the characteristics, advantages, and disadvantages of classical and single-system designs is available in Bloom, Fischer, and Orme, 1994.) The use and administration of the measure would vary with the design, from once in a cross-sectional survey, to pre- and post-test administration in a classical experiment, to repeated administration -- perhaps once or twice weekly -- in a single-system design that lasts for several weeks.
The Role of Measurement
One of the key challenges of all types of research, and practice as well, is finding a way to measure the problem. Measurement helps us be precise in defining problems and goals. It is measurement of the client's problems that allows feedback on the success or failure of treatment efforts, indicating when changes in the intervention program are necessary. Measurement procedures help standardize and objectify both research and practice. Using procedures that can also be used by others provides a basis for comparing results of different intervention programs. In a word, measurement of our clients' problems helps us know where we are going and when we get there.
Because formal measurement procedures provide some of the best bases for evaluating what we do, they are essential components of responsible, accountable practice. In Chapter 3 we will review a range of measurement procedures available to practitioners. However, the focus of this book is on one type of measure: standardized paper-and-pencil questionnaires that can be filled out by the client in a relatively short period of time, that can be easily administered and scored by the practitioner, and that give fairly accurate pictures of the client's condition at any point in time and/or over a period of many administrations. These types of measures have been called rapid assessment instruments (RAIs) by Levitt and Reid (1981).
We believe that these are among the most useful of all measurement tools for reasons that will be discussed throughout this book. Suffice it to say at this point that although there are many examples of these instruments, until now they have not been available, reprinted in their entirety, in a single source. It is our hope that by compiling these instruments in this book, we will encourage far greater utilization of them in the everyday practice of the helping professions.
Copyright © 1994 by Joel Fischer and Kevin Corcoran
Copyright © 1987 by The Free Press