
Arreola, Raoul (2000). Developing a Comprehensive Faculty Evaluation System: A Handbook for College Faculty and Administrators on Designing and Operating a Comprehensive Faculty Evaluation System, 2nd Ed. Bolton, MA: Anker Publishing.

Pp. vi + 230

$49.95       ISBN 1-882982-32-0

Reviewed by Thomas Diamantes
Wright State University

October 9, 2002

The concept of tenure is an integral part of the employment relationship between institutions of higher education and individual faculty members (Mawdsley, 1999). Promotion and tenure decisions are often difficult and always have important long-term consequences for both the candidate and the institution (Rhoades-Catanach & Stout, 2000). As a colleague once observed, “tenure establishes a marriage between a faculty member and the university where divorce is not an option!”

How should the professorate be assessed? Lane Cooper, in his introduction to The Rhetoric of Aristotle (p. xxiii), says, “People in actual life make their choices first, and then argue in accordance with those choices. Few argue a matter out so as then to make their choices from reasonable inference.” Some committees use tenure and promotion documents simply to strengthen their argument for or against candidates; their minds are made up very early in the review, long before the vote is even considered. What should guide promotion and tenure committees? A tenure and promotion committee could turn to Scholarship Assessed to answer the question of what criteria should be used in judging a faculty member’s performance. The committee could judge its peers against the six dimensions of good scholarship: clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique (Glassick, Huber & Maeroff, 1997).

This is clearly a political process that leaves much to chance. In Developing a Comprehensive Faculty Evaluation System (hereafter referred to as Faculty Evaluation System), Raoul Arreola explains a method of designing and operating a faculty evaluation system. He states that it is not a “canned” faculty assessment program, nor is it a fully developed evaluation system complete with forms and policies ready to be used by colleges and universities. Rather, Professor Arreola has developed a systematic, practical procedure for building a faculty evaluation system based on sound administrative principles and research evidence (p. xv). Citing the demand for accountability, Arreola’s Faculty Evaluation System notes that the practice of tenure itself has recently been questioned and that calls for post-tenure evaluation have been heard.

An important part of Arreola’s Faculty Evaluation System is its explanation of what faculty evaluation is and how it can be linked to faculty development. The purposes of a faculty evaluation system are twofold: to provide feedback for self-improvement and to provide data for personnel decisions. Arreola’s Faculty Evaluation System shows how to use such a system to develop an overall comprehensive rating score to aid in promotion and tenure, merit pay, and post-tenure decisions, an important part of the solution to problems caused by vague, unclear, and unfair faculty evaluation policies.

Arreola’s Faculty Evaluation System defines faculty evaluation and defends the use of numbers to indicate differences in faculty performance: “since any faculty evaluation system will involve the measurement of some aspects of faculty performance, numbers will be unavoidable” (p. xviii). The purposes of faculty evaluation are explained, as well as how the program can and should be used. This discussion leads to a review of the tripartite requirements of the professorate: teaching, service, and research, although Arreola’s Faculty Evaluation System replaces the term “research” with “scholarly and creative activities.”

A novel view of college teaching is found in Arreola’s concept of the college teacher as a meta-professional: the notion that college teaching is a profession built “on top” of another. All college faculty have a specific content area in which they received academic preparation, and a traditional assumption is that content mastery will ensure good college teaching. Here, Arreola links faculty evaluation with faculty development by showing that faculty evaluation can develop the two areas necessary to successful college teaching: instructional design and instructional delivery. A comprehensive faculty evaluation system provides feedback to college teachers and provides data for personnel decisions.

Common obstacles to establishing successful programs are presented in Arreola’s Faculty Evaluation System as well as reasons why many programs fail. Two such obstacles are administrator apathy and faculty resistance. Guidelines are provided for overcoming obstacles and avoiding errors.

In the preface to this second edition, Arreola explains that he has served as a consultant to thousands of administrators and faculty from hundreds of colleges and universities. This background led to the development of his proven and reliable eight-step process. The eight steps customize the system to meet the specific needs of various institutions of higher learning. Here are the eight steps, with a brief discussion of each:

  1. Determining the faculty role model. The first chapter includes a table that lists possible roles with suggested role-defining faculty activities (the author refers to the table as a partial list, but it is actually quite detailed and thorough) and a faculty role model worksheet. In this step, the goal is to reach consensus on which of the many faculty activities should be evaluated. The conventional roles of teaching, research, and service are introduced. This is typically the starting point that must be addressed by the faculty.
  2. Determining faculty role model parameter values. This consists of establishing the relative importance of each role to the institution, that is, determining how much value or weight may be placed on each role in the faculty role model. A worksheet to determine the weighted faculty role model is included.
  3. Defining roles in the faculty role model. This step involves coming to a consensus as to how each of the roles is defined. All roles in the faculty role model should be defined in terms of observable achievements, products, or performances that can be documented. In defining the teaching role, several aspects of teaching and learning are explored. The most important part of this step is defining teaching as involving four components: 1) instructional delivery skills, 2) instructional design skills, 3) content expertise, and 4) course management. A nice feature of this section is the long list of scholarly and creative activities and service roles available for faculty to choose from in constructing the role model. Teaching is defined as engaging in specifically designed interactions with students that facilitate, promote, and result in student learning. Here, two types of community service are defined: personal and institutional. Additionally, the distinction between service and consulting is made. Arreola introduces “collegiality” as a factor in the evaluation of faculty and defines it as the effect a faculty member has on the productivity of colleagues: does it inhibit or enhance? Mawdsley (1999) argues that collegiality is indeed a factor in tenure decisions and gives twenty-four good reasons: that is how many legal tenure cases he cited in his research on collegiality!
  4. Determining role component weights. Now the faculty must determine how much relative importance each of the four defining components from Step 3 should have in the evaluation of the teaching role as a whole. The tool that achieves this is the Source Impact Matrix, which the author developed to gather the component weights.
  5. Determining appropriate sources of information. The next step is to come to an agreement as to who should provide the information on which the evaluations will be based. Another tool, the Source Identification Matrix, was developed by the author to address this concern. Its purpose is to avoid complete reliance on student evaluation forms or any other single source of information; it allows the inclusion of peer and department head information as well as student evaluations.
  6. Determining information source weights. The principle of controlled subjectivity requires that values be specific and built into the evaluative process. The faculty must arrive at some consensus as to the credibility of the various sources of information and determine the impact that information from each source will have on the overall evaluation. To do this, the author devised another matrix, the Weighted Source by Role Component Matrix.
  7. Determining how information should be gathered. The author feels that at this point the process moves into the less political and more technical area of measurement. This step includes determining what type of form, questionnaire, checklist, or other data-gathering procedure or method will be used to obtain the specified information for each source. Another matrix, the Data Gathering Tool Specification Matrix, was developed to review the roles carefully and help develop an operational plan for constructing the data-gathering tools.
  8. Completing the system – selecting or designing forms, protocols, and rating scales. This is the last step in the process – designing the questionnaires, forms, procedures, and protocols for the evaluation system. This step is what some institutions do first, thereby dooming their evaluation programs. There is no specific recipe for developing the forms and procedures, but if all of the previous steps have been followed, it is a relatively straightforward technical matter, provided the faculty strive for objectivity, reliability, and validity in the forms.

After the eight-step program, Arreola’s Faculty Evaluation System continues by showing how to generate an Overall Comprehensive Rating (OCR) based on data from the comprehensive faculty evaluation system. The OCR can be used in promotion, tenure, merit pay and post-tenure review decisions.
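To give a concrete sense of how the weights from Steps 2, 4, and 6 eventually combine into a single score, the following is a minimal sketch, in Python, of a weighted composite rating. Every name, weight, and rating below is a hypothetical illustration, and the calculation is simply a generic weighted average on a 1-4 scale; it is not Arreola's actual OCR computation, which also incorporates the source weights and matrices described above.

    # Hypothetical role weights from Step 2 (must sum to 1.0).
    role_weights = {"teaching": 0.50, "scholarship": 0.30, "service": 0.20}

    # Hypothetical component weights for the teaching role (Step 4)
    # and hypothetical component ratings on a 1-4 scale.
    teaching_component_weights = {
        "instructional delivery": 0.35,
        "instructional design": 0.35,
        "content expertise": 0.20,
        "course management": 0.10,
    }
    teaching_component_ratings = {
        "instructional delivery": 3.4,
        "instructional design": 3.1,
        "content expertise": 3.8,
        "course management": 3.5,
    }

    # Weighted average of the teaching components gives a teaching role score.
    teaching_score = sum(
        teaching_component_weights[c] * teaching_component_ratings[c]
        for c in teaching_component_weights
    )

    # Suppose scholarship and service scores were computed the same way.
    role_scores = {"teaching": teaching_score, "scholarship": 3.6, "service": 3.2}

    # Composite rating: each role score weighted by its role weight from Step 2.
    overall_rating = sum(role_weights[r] * role_scores[r] for r in role_weights)

    print(f"Teaching score: {teaching_score:.2f}")
    print(f"Overall composite rating: {overall_rating:.2f}")

The point of the sketch is only that, once the faculty have agreed on the weights, the final number follows mechanically; the hard, political work lies in the earlier steps, not in the arithmetic.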

Arreola’s Faculty Evaluation System next addresses the issues associated with peer review. While acknowledging the significance of peer input, Arreola calls this component one of the biggest sources of problems and confusion. For this reason, several peer review models are offered to aid in the formation of peer review committees. Arreola also introduces the Best Source Principle, under which the faculty evaluation system restricts peers to providing only information that requires a professional perspective or for which peers are the primary or best source of information.

After citing the many researchers who advise caution in using peer evaluation, especially for personnel decisions, Arreola’s Faculty Evaluation System includes a passage from a 1932 study by William R. Wilson, reprinted in 1999, that eloquently expresses the underlying concern:

It would be unquestionably a splendid thing if mature and experienced persons could be induced to visit classes and appraise and criticize them. The judgment of an outsider, however, is at best a second-hand impression of the effectiveness of a course. Presumably the mature visitor would appraise the course by better standards than students possess. They would not, however, reveal the effect of the course upon the students who take it. If the students report that the course is interesting and the visitor reports that it is dull, the only conclusion that can be drawn is that the course is interesting to the students and dull to the mature observer. If either set of appraisals is taken as a criterion, the other set is invalid. A distinguished scholar, dissatisfied with the ratings he received from a large beginning class, complained that he was casting pearls before swine. The mature visitor would have agreed. But does the wise swineherd continue to lavish pearls upon his charges after he has found the diet cannot be assimilated? (Wilson, 1999, p. 568)

After peer evaluation issues are thoroughly explored, the book presents a section dedicated to the highly controversial topic of student rating forms, forms Arreola claims have been researched for more than 75 years and are used regularly and routinely in over 85% of faculty evaluation systems. An interesting series of questions (and their answers) concerning student ratings is presented, based primarily on research by Lawrence M. Aleamoni (Aleamoni, 1999). Here are some of the questions; the answers might surprise you.

  • Aren’t student ratings just based on a popularity contest?
  • Aren’t student rating forms unreliable and invalid?
  • Isn’t it true that I can “buy” good student grades?
  • Does the time of day the course is taught affect the student ratings?

(According to the research, all of these claims are false, provided the student evaluation forms used are valid, reliable, and professionally developed.)

Arreola’s Faculty Evaluation System concludes with case studies and a sample of forms from studies conducted at two institutions. The bibliography contains hundreds of resources, many of which are cited as references in various sections of the book.

Today the tenure system continues to receive criticism over how faculty productivity is analyzed and determined. Further, how faculty work is evaluated and rewarded, both in the pre- and post-tenure years (Boyer, 1990; Boyer, 1994; Tierney, 1998), will remain a central issue in the coming years. It behooves administrators in higher education to develop a sound knowledge base in the literature on faculty development, and Arreola’s Faculty Evaluation System should be required reading.

References

Aleamoni, L. M. (1999). Student rating myths versus research facts: An update. Journal of Personnel Evaluation in Education, 13, 153-166.

Boyer, E. (1994). Scholarship assessed. Paper presented at the meeting of the American Association for Higher Education Conference on Faculty Roles and Rewards, Washington, DC.

Boyer, E. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.

Cooper, L. (1998). The rhetoric of Aristotle. Englewood Cliffs, NJ: Prentice Hall.

Glassick, C., Huber, M., & Maeroff, G. (1997). Scholarship assessed: Evaluation of the professoriate. San Francisco, CA: Jossey-Bass.

Mawdsley, R. (1999). Collegiality as a factor in tenure decisions. Journal of Personnel Evaluation in Education, 13, 167-177.

Rhoades-Catanach, S. & Stout, D. (2000). Current practices in the external peer review process for promotion and tenure decisions. Journal of Accounting Education, 18, 171-188.

Tierney, W. G. (1998). Academic community and post-tenure review. Academe, 83(3), 23-25.

Wilson, W. R. (1999, September/October). Students rating teachers. Journal of Higher Education, 70(5). (Original work published 1932)

About the Reviewer

Thomas Diamantes, Ed.D., is an Associate Professor of Educational Administration at Wright State University, Dayton, Ohio. His research interests include promotion and tenure, faculty development, and school administrator preparation. He teaches graduate courses on campus and for the Teacher Leader Program, a distance-education, web-enhanced master’s degree program. He also advises candidates preparing for supervisor, principal, and superintendent licensure.
