
Lipsey, Mark W., and Wilson, David B. (2001). Practical Meta-Analysis. Applied Social Research Methods Series (Vol. 49). Thousand Oaks, CA: SAGE Publications.

Pages ix + 247.

$69.95 (Cloth), ISBN 0-7619-2167-2
$26.95 (Paper), ISBN 0-7619-2168-0

Reviewed by Marie Miller-Whitehead

January 29, 2002

It almost looks as if analysis were the third of those "impossible" professions in which one can be quite sure of unsatisfactory results. The other two, much older-established, are the bringing up of children and the government of nations.—Freud

What is a cynic? A man who knows the price of everything and the value of nothing.—Oscar Wilde

Perhaps the above quotations express one of the more paradoxical phenomena that researchers in the social sciences encounter: the necessity of quantifying the nearly unquantifiable to justify that which quite possibly needs no such justification. Philosophy, religion, health care, education, the arts: mankind has embraced these because they are "the right thing to do," they uplift the spirit, they stimulate the imagination, they provide hope for the future, they are balm for the soul, they bring comfort to those in pain or distress. And yet, while one may find satisfaction in having created something of beauty, in having completed a challenging task, or in having reached out to help those less fortunate, at some point he or she will be compelled to reflect upon and perhaps measure the degree of "success" in the endeavor, asking, "Is what I have accomplished worth the effort that has been expended, or would my time have been better spent elsewhere?"

While I do not propose this review as an excursion into the historical foundations of social science research in general and meta-analysis in particular, the authors of Practical Meta-Analysis trace the now widely used term "meta-analysis" to G. V Glass and his methodological approach to the analysis of 375 psychotherapy studies, undertaken in response to an ongoing debate, begun in 1952, over the effectiveness (or lack thereof) of psychotherapy. According to Lipsey and Wilson, the subsequent publication of findings by Smith and Glass in 1977 had two important results: it established meta-analysis as an accepted and now widely used "method of summarizing the results of empirical studies within the behavioral, social, and health sciences" (p. 1), and it lent credence to a discipline that was at that time struggling for evidence of its efficacy in the treatment of psychological disorders. Glass, McGaw, and Smith (1981) summarized the criticisms of their method of standardizing and averaging research results as falling into four categories: the apples and oranges problem, the use of data from "poor studies," selection bias in reported research, and the use of nonindependent data (p. 218). Despite its wide acceptance, recent literature suggests that critics continue to point out the limitations of meta-analysis, particularly when there is an interest in identifying causal relationships: Cronbach (1991), for example, "believes the social system needs causal explanatory knowledge, not knowledge of things that work on an average under a set of diverse conditions" (p. 363). On the other hand, Stake (1991) points out that meta-analysis is widely used because "numbers make it possible to keep track of large amounts of data" (p. 312) and that clients often find quantitative approaches more credible and useful than qualitative ones. Campbell (1991), however, believes that meta-analyses "have been widely accepted and have changed the thinking about some particular class of solutions" (p. 159), particularly "in social sciences where quantified predictions are rarely made" (p. 159).

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.—Lord Kelvin

The authors of Practical Meta-Analysis have conducted extensive research on programs for at-risk and delinquent juveniles, an area where mixed results are not uncommon, and their work expands upon the methodology established with the publication of Meta-Analysis in Social Research (Glass et al., 1981). In eight chapters and five appendixes, Lipsey and Wilson address issues related to searching for and selecting appropriate studies, selection of effect size measures, coding, data management, analysis strategies, computational techniques, and interpretation of results. The appendixes provide bibliographical database resources, effect size measures, macros for SPSS and Excel, and coding forms.

Recognizing that most researchers are stronger in some areas than others, the authors open Chapter 1 with an overview of the basics of database retrieval and with search strategies designed not only to screen out irrelevant (or "poor") studies but also to locate those most useful to a researcher seeking studies that meet predetermined inclusion criteria, such as a minimum number of participants; multiple database searches are generally necessary to avoid selection bias. At issue, and particularly important in meta-analysis, is the lack of overlap often found across multiple database searches, as well as the differences among results likely to be published in journals, monographs, conference proceedings, dissertations, theses, government publications, and the like (Glass et al., 1981; Lipsey & Wilson, 2001). For example, if the intent of a meta-analysis is to examine the effects of alcohol on aggressive behavior in humans, the authors found it advisable to limit searches for "alcohol" and "aggression" with NOT "animals" to exclude the many studies conducted with animal subjects (the apples and oranges issue).
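The Boolean strategy just described is simple to illustrate in code. The following sketch is mine, not the book's, and real bibliographic databases each have their own query syntax; it merely assembles the kind of query string the authors describe:

```python
# A minimal sketch of the Boolean search strategy described above.
# Hypothetical example; actual syntax varies across bibliographic databases.
include_terms = ["alcohol", "aggression"]
exclude_terms = ["animals"]

query = " AND ".join(include_terms)
for term in exclude_terms:
    query += f" NOT {term}"

print(query)  # -> alcohol AND aggression NOT animals
```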

Once the researcher has retrieved a sufficient number of studies, or has exhausted the population of those that fit predetermined criteria for inclusion in the meta-analysis (such as number of participants), there remains the task of organizing the studies in some coherent and meaningful way; the authors provide several templates for coding the studies included in the meta-analysis (pp. 85, 221). Lipsey and Wilson begin with a strategy and decision tree (p. 68) for encoding effect size statistics and the related computations of variance weights and standard errors, given multiple studies with differing numbers of participants; they also address the coding of studies for which no effect size was reported and insufficient information was given to compute one (e.g., when results were simply reported as nonsignificant). According to the authors and others who have published research on meta-analysis and effect size, the use of multivariate findings in meta-analyses "will be greatly facilitated by a change in the publication norms such that reporting correlation matrices becomes standard" (p. 69).
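To give a concrete sense of the computations the decision tree organizes, here is a minimal Python sketch of the standard formulas for a standardized mean difference, its small-sample correction, its sampling variance, and the resulting inverse-variance weight. The book supplies macros for such computations; this rendering and the example values are mine:

```python
import math

def std_mean_difference(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Small-sample correction factor
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

def es_variance(es, n1, n2):
    """Approximate sampling variance of a standardized mean difference."""
    return (n1 + n2) / (n1 * n2) + es**2 / (2 * (n1 + n2))

# Invented example values
es = std_mean_difference(105.0, 100.0, 15.0, 14.0, 40, 38)
v = es_variance(es, 40, 38)
print(f"ES = {es:.3f}, SE = {math.sqrt(v):.3f}, weight = {1 / v:.2f}")
```

The inverse of the variance serves as the study's weight, so larger studies, whose effect sizes are estimated more precisely, count for more in the synthesis.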

The exigencies of meta-analysis are such that more than one person will likely be working with the data; thus coder training, reliability checks, and a systematic method of data management are more than advisable, particularly if temporary staff are used who may not be available later to answer questions about procedures and data coding. For example, while many excellent bibliographic database management software programs are available to help researchers who may catalog hundreds or even thousands of articles, it will probably be necessary to create a specialized database or spreadsheet to keep track of additional information about each of the articles or studies included in a meta-analysis. Chapters 4 and 5 discuss study descriptors and coding for the effect size measures that may be encountered, the creation and use of flat files and multiple merge files, and the related pros and cons of entering codes directly into the computer versus transferring data to a code form before entry.
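As an illustration of the flat-file idea (my sketch, with hypothetical field names; the book's coding forms are far more detailed), study-level descriptors and effect-size-level records can be kept in separate structures keyed by a study ID and then merged into a single flat file:

```python
import csv

# Study-level descriptors, keyed by study ID (hypothetical fields)
studies = {
    1: {"author": "Smith", "year": 1994, "design": "randomized"},
    2: {"author": "Jones", "year": 1997, "design": "quasi-experimental"},
}

# One record per effect size; a single study may contribute several
effects = [
    {"study_id": 1, "construct": "aggression", "es": 0.42, "var": 0.05},
    {"study_id": 1, "construct": "alcohol use", "es": 0.31, "var": 0.06},
    {"study_id": 2, "construct": "aggression", "es": 0.18, "var": 0.03},
]

fields = ["study_id", "author", "year", "design", "construct", "es", "var"]
with open("flat_file.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for row in effects:
        # Merge each effect-size record with its study's descriptors
        writer.writerow({**row, **studies[row["study_id"]]})
```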

The remaining chapters (6-8) are devoted to analysis, interpretation, and presentation of results in graphs, tables, and figures. Because the unit of analysis in a meta-analysis is the study rather than the individual, researchers in the behavioral sciences who work mainly with individuals may be unaccustomed to the complications this level of analysis entails; failure to weight studies by sample size, for example, has led to criticism of "lumpy" or nonindependent data in meta-analysis. Lipsey and Wilson discuss data transformations, treatment of missing data, and various adjustments that may be necessary to minimize or avoid this sort of problem, and they provide a wealth of examples of analyses that may be conducted with the data.
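For readers new to study-level analysis, a minimal sketch of the basic fixed-effects computations may be helpful: the inverse-variance weighted mean effect size, its standard error, and the homogeneity statistic Q. These are standard formulas; the Python rendering and the example values are mine, not the book's:

```python
import math

def fixed_effect_summary(effect_sizes, variances):
    """Weighted mean effect size, its standard error, and homogeneity Q."""
    weights = [1 / v for v in variances]
    mean_es = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
    se_mean = math.sqrt(1 / sum(weights))
    # Q is compared to a chi-square distribution with k - 1 degrees of freedom;
    # a significant Q suggests the effect sizes are not homogeneous
    q = sum(w * (es - mean_es) ** 2 for w, es in zip(weights, effect_sizes))
    return mean_es, se_mean, q, len(effect_sizes) - 1

# Invented example values
mean_es, se, q, df = fixed_effect_summary([0.41, 0.12, 0.55, 0.30],
                                          [0.05, 0.03, 0.08, 0.04])
print(f"mean ES = {mean_es:.3f}, SE = {se:.3f}, Q = {q:.2f} (df = {df})")
```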

As the authors point out, "Meta-analysis results are only as good as the studies that are included in the meta-analysis" (p. 157). Postpositivists seeking the identification of definitive causal relationships among variables by means of experimental or quasi-experimental designs will continue to find much lacking in the meta-analytic approach, but pragmatists and positivists will find it a useful method for bringing order to chaos: an invaluable tool for the organization, analysis, and interpretation of large amounts of data for presentation to audiences with diverse interests and agendas. Examples include a meta-analysis of student achievement in federal evaluations of Title I, a program with many participants and many stakeholders (Borman & D'Agostino, 1996), and the use of meta-analysis in systematic reviews of multiple randomized controlled trials, described as producing "the best external evidence for use in answering questions about therapeutic interventions" (Miller & Crabtree, 2000, p. 612).

References

Borman, G. D., & D'Agostino, J. V. (1996). Title I and student achievement: A meta-analysis of federal evaluation results. Educational Evaluation and Policy Analysis, 18(4), 309-326.

Campbell, D. T. (1991). Methodologist of the experimenting society. In W. Shadish, Jr., T. D. Cook, and L. Leviton (Eds.), Foundations of program evaluation (pp. 119-170). Newbury Park, CA: SAGE Publications.

Cronbach, L. J. (1991). Functional evaluation design for a world of political accommodation. In W. Shadish, Jr., T. D. Cook, and L. Leviton (Eds.), Foundations of program evaluation (pp. 323-376). Newbury Park, CA: SAGE Publications.

Glass, G. V, McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: SAGE Publications.

Miller, W. L., & Crabtree, B. F. (2000). Clinical research. In N. K. Denzin and Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 607-631). Thousand Oaks, CA: SAGE Publications.

Stake, R. E. (1991). Responsive evaluation and qualitative methods. In W. Shadish, Jr., T. D. Cook, and L. Leviton (Eds.), Foundations of program evaluation (pp. 270-314). Newbury Park, CA: SAGE Publications.

About the Reviewer

Marie Miller-Whitehead
Director
Tennessee Valley Educators for Excellence
TVEE.ORG
PO Box 2882
Muscle Shoals, AL 35662

Email: marie@tvee.org
Personal web page: http://www.dpo.uab.edu/~tnmarie/mmm.htm

Research interests: program evaluation and research, school district accountability indicators, computer assisted learning, educational politics and policy, educational equity for minorities and underserved populations. The reviewer has a Ph.D. in Educational Leadership from the University of Alabama at Birmingham and additional postdoctoral work in educational research.
