
Bowers, C. A. (2000). Let Them Eat Data: How Computers Affect Education, Cultural Diversity, and the Prospects of Ecological Sustainability. Athens, GA: The University of Georgia Press.

Pp. 216

$18.95 (Paper)         ISBN 0-8203-2229-6

Reviewed by Bryan R. Warnick
University of Illinois at Urbana-Champaign

July 6, 2001

        The costs of using computers in the classroom are various and real. To his credit, C. A. Bowers has been one of only a few voices in the educational literature to cast a critical eye on the uses and abuses of computers in the classroom. Although more voices have now joined Bowers in his criticism (Clifford Stoll is another prominent critic), there are still too few. Indeed, I recently reviewed the New York Times's archive of articles dealing with technology use in education and found very few articles that were anything but celebratory in tone. Given this lack of critical discussion, the real costs of using computers in educational settings remain almost completely untallied. In his latest book, Bowers goes a long way toward a deeper understanding of how using computers in classrooms affects students by placing the computer in a broader context of environmental and cultural degradation. This is itself a service. In addition, as he develops his critique, Bowers outlines a method that may be helpful not only in analyzing the relationship between educational technology and environmental destruction, but also in examining more general issues in education.
        Bowers's main thesis is that computer use reinforces the attitudes of the Western Industrial Revolution, and that this, in turn, leads to environmental degradation. His stance is fairly pessimistic, and he seems to hold out little hope that using computers as educational tools will yield anything but negative consequences. Although I will be critical of some of his more pessimistic conclusions, I believe Bowers's focus on the cultural experience of computer use itself is an important contribution to the study of technology in education. His analysis deals with a topic that is rarely broached in educational discourse, and Bowers's book is timely in filling this lacuna in the discussion.
        In his introduction, Bowers outlines some of the global environment's most pressing problems: exploding overpopulation, exhaustion of agricultural land and fisheries, global warming, hazardous waste, and harm caused by synthetic chemicals. These environmental problems, Bowers argues, will be the major headache of the 21st century, and they are already having a demonstrably detrimental impact on human health. The first half of the book, comprising chapters one to four, reveals the general complicity of the computer in worsening environmental problems; the remainder of the book examines this complicity with specific reference to educational concerns. Bowers attempts to end the book on a more positive note, exploring ways of reversing the environmental and cultural damage wrought in an environment of technological ubiquity.
        Bowers, of course, does recognize that computers have been used for environmental action and cultural conservation, but he quickly dismisses such activities as unable to justify computer use in education. For Bowers, programs posing as environmental learning software merely further the illusion that the computer is environmentally neutral, an illusion that may deflect attention from the real problems inherent in the learning technology. He reviews how the cyberworld has been glowingly represented to the public by such writers as Sherry Turkle (1995) and Howard Rheingold (1991). These writers have praised the attitude of decentered subjectivity encouraged by the computer, and the on-line community creation made possible by the Internet. Bowers, however, shows the darker side of these same attitudes and thought patterns. He argues that the subjective, decentered attitudes hailed by computer enthusiasts as personally liberating are in reality culturally and environmentally destructive, and reducible to a devil-may-care individualism. He also adds to the list other attitudes that are reinforced by computer use — moral relativism, a disregard for “local” knowledge, anthropocentrism, and other such dispositions — and shows how these ways of thinking play a role in thwarting the prospects of ecological sustainability.
        To explain how the computer reinforces these ecologically unsound postures, Bowers turns to metaphor theory. Western industrial culture is based on what he describes as certain “root” metaphors that embody the culture's assumptions about human beings and their place in the world. The root metaphors of Western culture stem from the Book of Genesis which, according to Bowers, “extols the mythopoetic narrative that led to viewing men as dominant over both women and the environment” (p. 27). Turning to more recent history, Bowers traces our current fascination with materialist, technological lifestyles back to the birth of Modernity and its mechanistic view of life processes. Of particular concern is the master narrative of organic evolution, which has spawned the social Darwinism often endorsed by computer advocates generally and artificial intelligence enthusiasts specifically.
        A culture's root metaphors give coherence and unity to cultural systems; they are an often unnoticed, though omnipresent, feature of a culture's linguistic and ideological heritage. New dimensions of experience are understood and structured by analogic connections to these root metaphors. Thus, we understand a new experience by relating it to familiar schemas, that is, to preexisting frameworks that organize the concepts of mental life. With few exceptions, Bowers describes this process quite clearly, although he is a little stingy in providing evidence for these claims. He tends to assert such important declarations matter-of-factly and move on. He does, however, seem to agree with the work of linguist George Lakoff and philosopher Mark Johnson (1980) in conceptual metaphor theory, although he mentions them only to criticize them for not going far enough in explaining what he calls “iconic” metaphors. While there does seem to be support for Bowers's view, he does not find it necessary to cite much of the relevant research.
        How does Bowers relate his theory of metaphor to computer use? Since root metaphors pervade nearly every aspect of culture, these metaphors structure both how a culture's technology is formed and, later, how that technology is understood and experienced. Thus, technology is a carrier of a culturally specific set of values and world-views. The experience of technology is an encounter with the cultural world-views surrounding the technology's creation, and this experience serves to reinforce that particular world-view and to obscure any competition. The experience with a technology strengthens certain attitudes while cloaking the possibility of alternative ways of thinking.
        For Bowers, the attitudes that are reinforced by computer technology are those of Western industrial culture, and as such are attitudes that have grave implications for the environment. Computers reinforce ways of thinking that favor:
explicit and decontextualized knowledge (data, information, and models, with no clear authorship); subjective judgement and individual autonomy; language as a conduit of sender-receiver communication; subjective experience of temporality, where the value of cultural traditions and responsibilities to future generations is individually determined; instrumental and subjective morality; and human-Nature relationships dominated by anthropocentrism. (p. 158)
As evidence for the claim that the experience of the computer perpetuates these cultural attitudes, Bowers points to the writings of enthusiasts of technology and science that contain representations and endorsements of these attitudes. On pages 29-33, he cites specifically the work of Francis Crick, Howard Rheingold, Michael Benedikt, Sherry Turkle, and Nicholas Negroponte. Bowers gives examples where these authors describe and celebrate the positions listed above.
        Bowers pays particular attention to how Western culture's individualism has structured technology and how technology, in turn, reinforces a sentiment of individualism:
With the exception of the near-total immersion of virtual reality, the experience of self-other relationships in cyberspace amplifies the culture of autonomous individualism in various ways. The words, graphics, and images of the screen represent decontextualized forms of text that require individual interpretation and analysis. (p. 34, emphasis added)
In short, because the realm of cyberspace is decontextualized from natural constraints, it is subject to the individual will — everything is controllable by the will of the solitary computer user. Since nothing is moored to a preexisting context, everything may be rearranged according to individual preferences.
        Bowers is surely right that the experience of the computer reinforces an attitude of autonomous individualism. Other sources can be brought in to further establish this point. The experience of a computer-mediated environment is said to facilitate the possibility of absolute individual creation. Computer films such as Tron (1982) and, more recently, The Matrix (1999) portray computer users as having magical powers within the computer world — limits and boundaries are nonexistent and reality depends on the whims of the user. Studies of the ethics of programmer groups such as hackers repeatedly show a strong individualism — an individualism that, it seems, has been reinforced by exposure to cyberspace. Often, the reason given for this attitude is similar to the reason offered by Bowers: it is attributable to the malleable nature of cyberspace. “Cyberspace represents,” writer Scott Bukatman tells us, “a completely malleable realm of transitory data structures in which historical time is measured in nanoseconds and spatiality somehow exists both globally and invisibly” (1993, p. 18). There is an “absolutism of powers which are imagined to inhere within an electronic reality” (p. 104). Cyberspace is not limited by the context of natural laws, as Joseph Weizenbaum points out:
An engineer is inextricably impacted in the material world. His creativity is defined by laws; he may, finally, do only what may lawfully be done. The computer programmer, however, is a creator of universes for which he alone is the lawgiver. . . .[U]niverses of virtually unlimited complexity can be created in the form of computer programs. Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behavior. . . .[T]he corruption evoked by the computer programmer's omnipotence manifests itself in a form that is instructive in a domain far larger than the immediate environment of the computer. (1976, p. 115)
Other testimony could be given, and it seems as though there is a chorus of voices supporting the idea that the experience of a computer, with its total subjection to the individual will, reinforces this particular attitude of individual supremacy. Thus, Bowers's description of the attitudes reinforced by computer use, and the methods he uses to explain this process of reinforcement, both seem supportable.
        But why would these attitudes be troubling to those concerned about ecological sustainability? Bowers answers this question in his third chapter by arguing that the experience of computers is the replacement of “local knowledge” with data. He writes, “To digitize thought and aesthetic expression is to abstract them from their multilayered cultural and ecological contexts” (p. 54). In contrast, ecological sustainability demands an intimate knowledge of context — a knowledge of place and of the culture that has learned to live in that place. The abstract computational impulse works against local knowledge, that is, against personal familiarity with a place's streams, grasses, soil, trees, and weather. This is the knowledge that becomes vital in directing responsible human activity, such as intelligently situating a house, sowing crops, and preserving plant and animal diversity. Bowers continues:
Knowledge of place, when it is deeply embedded in personal experience and understood as an intergenerational responsibility, also includes knowing who were the earlier inhabitants, their technology and economy, and the mythopoetic narratives at the base of their moral community. It also involves knowledge of immediate ancestors and what they learned or failed to learn as they build their community on the moral and conceptual baggage they brought with them in their immigration. We receive this knowledge through stories of their previous experiences with the land. (p. 64)
The essential knowledge of how to take care of a particular environment is learned from the people who have come to grips with the demands of the land. Thus, the experienced members of a cultural group, those who possess the “elder knowledge,” hold the key to ecological sustainability in any given region. This is the knowledge, though, that is devalued by science and computers. It is not easily abstracted, generalized, digitized, or turned into data. The individualism reinforced by computers reacts against cultural restraint. Computers, in demanding abstraction and promoting individual autonomy, devalue local, elder knowledge.
        Certainly, Bowers's arguments are strong in many ways. Where Bowers goes wrong, however, is in taking too limited a view of the cultural experience perpetuated by computer technology. One may look to other sources for a more complete picture. As one fills out the picture, it becomes clear that the experience of computational technology may not lead entirely to destructive environmental attitudes. As an example of a more complete picture, I turn to David Bennahum's (1998) account of “coming of age” in cyberspace. He writes of himself and his young, technologically savvy friends that, as they began to use technology more and more, they:
were seeing individual objects as part of larger objects that in turn formed a complex whole. The same kind of thinking would be reinforced later as we discovered computers, which came as both mechanical systems (connections of printers, disks, monitors, and central processing unit) and virtual systems (connections of separate programs within an operating system) . . . . These emergent systems, from ATARI-DOS to credit reports, medical records, air-traffic control, video games, word processors, font design software — all the infinite permutations of what a digital computer can create — were certain principles, a way of thinking. Paramount was the sense that everything in the world was based on interlocking systems, and systems of systems. This was reflected in the architecture of computers and software. (pp. 37, 101)
        This thought-pattern, what has been called “systems thinking,” is commonly mentioned when the experience of the computer is discussed. The computer, it seems, creates a feel for how various components of a system fit together into a larger, interdependent whole. Under Bowers's own method, we could say this experience creates a metaphor, a metaphor of interconnected systems, which is then used to structure the larger world. Under this metaphor, the world is seen as a computer system — a complex whole that depends on the interworkings of its various sub-systems.
        One does not need to think very hard to determine that this “systems metaphor” may be profoundly ecological; it focuses on relationships between essential parts and the larger whole. Water, air, animals, cultures, and plants are all parts of an interdependent system, and to make sound environmental decisions one must recognize this interconnection among parts. Computer use, by reinforcing the systems metaphor, strengthens a world-view that recognizes this interdependence, and thus contributes to understanding ecological systems. In this regard, the computational experience would seem to reinforce sound environmental thinking. Bowers may still want to object, though, that a metaphor which portrays the world as an interconnected computer system is environmentally unsound because it suggests that one can manipulate ecological systems like one manipulates computer systems. Whatever the case, it seems that the interplay of computational metaphors is complex, and at the very least it appears that we should be cautious about classifying these metaphors as wholly good or bad.
        I am also uncomfortable with granting Bowers's premise that the attitude of individual autonomy is entirely destructive to ecological thinking. This trait is destructive to ecological sustainability because, for Bowers, an autonomous individual is not subject to tradition, and cultural tradition often contains the wisdom of how to live in harmony with local environments. This might serve as a reason to keep children away from computers in a society where there does exist a tradition of ecologically sustainable elder knowledge. But in 21st-century Western culture, there is little in the way of environmentally sound “elder knowledge” possessed by the dominant culture. The prevailing ideology is one of consumerism and environmental exploitation. Now, it seems that in this sort of world we would want students to be cultural rebels of sorts, that is, to be autonomous people not sucked into the ideology of consumer culture. Thus, limiting the formation of an ethos of autonomy may not be the best policy if one wants to stay the tide of environmental destruction. Environmental action is accomplished by political action. To act politically is to act autonomously — to act politically means to act, as individuals or as a community, and not to be acted upon. Thus, the ethos of autonomy reinforced by the experience of the computer may be beneficial to the environment in some ways, while at the same time promoting destructive attitudes.
        The second half of Bowers's book deals more specifically with education. Bowers complains, rightly, that the discussion of computers in schools on both local and national levels has been too narrow, and he specifically cites the lack of discussion about cultural and environmental issues. Of particular interest to educators is Bowers's attack on constructivism in education (a constructivist approach is said by many to be enhanced by computer technology). Constructivism, which Bowers understands to be the view that “students learn by actively constructing rather than acquiring knowledge,” is, in his view, both misleading and harmful. It is misleading for Bowers because it ignores the fact that, when children learn any sort of symbolic medium, like language, they are being socialized into that culture's world-view. Thus, by participating in a culture's symbolic heritage, the student is not really constructing knowledge but instead is being sculpted from the outside. Furthermore, constructivism is harmful because, “The emphasis on the child as constructor of knowledge appears to support liberal assumptions about freedom, progress, individualism, and an anthropocentric world” (p. 148). For Bowers, these views marginalize cultural ways of knowing. These are controversial claims, indeed, and whether Bowers is ultimately right or wrong (or right and wrong), he has done the educational community a service by asking new questions about a popular educational idea.
        Bowers does not focus entirely on the computer experience itself, and does discuss some educational software. The titles that come under his critical gaze include Storybook Weaver, DynoPark Tycoon, Oregon Trail II, SimCity 2000, SimLife, and Environmental Education Toolbox. Like the computer experience itself, these software titles suffer from the problem of being based on culturally specific assumptions. Often these programs give primacy to a child's subjective impulse instead of focusing on cultural or environmental context (as in Storybook Weaver), or they portray success in life wholly in terms of market value and profit (as in DynoPark Tycoon) or in terms of imposing one's culture on a foreign land (as in Oregon Trail II). Even the more environmentally oriented programs (such as SimLife and Environmental Education Toolbox) suffer from serious flaws. Most prominently, these programs often give the impression that complex systems can be scientifically managed; this attitude is, for Bowers, part of the problem. Abstract, decontextualized science is not the answer; cultural knowledge gained while living in a place for centuries is the answer. Thus, Bowers urges teachers who use these programs to teach against them, that is, to examine critically the assumptions that may work against ecological sustainability.
        At this point, though, Bowers curiously seems to de-emphasize what he had argued in the previous chapters. When speaking of how to improve classrooms for ecological sustainability he writes, “[One] approach would be to introduce a comparative cross-section of culturally based ecological practices into the design of educational software,” and complains, “students' encounter with the thought processes and values of educational software designers is too often a source of miseducation” (p. 175). The suggestion (and the complaint) here seems to be that software could be designed in ways that promote critical environmental reflection. The first section of the book, however, argued that the experience of the computer itself, apart from any software it is running, undermines ecologically sound attitudes. Under this first view, it would be impossible to eradicate miseducation at the level of the software. But Bowers is probably just being realistic here. Computers are not going anywhere, and since they are not, it is better to work within the given framework to undermine, as best one can, environmentally destructive attitudes. A more radical view, though, which more neatly follows from Bowers's analysis, is the complete rejection of computer technology in education as it currently exists.
        Bowers ends his book with seven important points for educators. These seven points sum up his major arguments. He argues that we should be aware: (1) of the differences between Western technologies and more ecologically sound cultures, (2) of alternative approaches to technology when making democratic decisions involving technology, (3) that further study is needed on how modern technology changes culture and commodifies relationships, (4) that a more complex view of culture is needed than what is currently presupposed by modern technology enthusiasts, (5) that technology affects language and thought patterns, (6) that issues of justice arise when technology and the nature of work intersect, and (7) that we should understand how the computer carries cultural assumptions that threaten diversity and sustainability.
        This is sound advice. More importantly, though, it seems that the main contribution of this book is moving the discussion about technology in the classroom toward dealing with how technology affects language, metaphor, attitudes, and thus, the social world. Computers, Bowers helps us realize, are themselves educators. Computers make moral, political, cultural, and environmental arguments, and it is time that these arguments were discussed in the educational community. Bowers has shown how such a discussion might proceed by focusing on the arguments computers make in the domain of ecology. But this is only one domain of human interest, and the rest remains unexplored.


References

Bennahum, D. S. (1998). Extra Life: Coming of Age in Cyberspace. New York: Basic Books.

Bukatman, S. (1993). Terminal Identity: The Virtual Subject in Post-Modern Science Fiction. Durham, NC: Duke University Press.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago, IL: University of Chicago Press.

Rheingold, H. (1991). Virtual Reality. New York: Summit Books.

Stoll, C. (1999). High-Tech Heretic. New York: Anchor Books.

Turkle, S. (1984). The Second Self: Computers and the Human Spirit. New York: Simon and Schuster.

Turkle, S. (1995). Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco, CA: W. H. Freeman and Company.

About the Reviewer

Bryan R. Warnick is a doctoral student in Philosophy of Education at the University of Illinois at Urbana-Champaign. His research interests include Philosophy of Technology, Ethics, and Philosophy of Mind as these disciplines relate to educational theory and practice. He holds a B.S. degree in Philosophy and Psychology from the University of Utah.
