How to Be Objective

Objectivity by Lorraine Daston and Peter Galison

Reviewed by Jan Golinski
American Scientist

All scientists strive for objectivity; they congratulate themselves when they think they have attained it. But what exactly does objectivity mean? Is it a matter of following the right procedures when doing an experiment or making an observation? Or is it an attribute of the person doing science, something like emotional detachment or freedom from personal bias? Or is it something to do with making contact with things "out there" in the world of reality? And what do these different possible meanings have to do with one another? Is there any guarantee that following the proper procedures or having the ability to suppress one's emotions will disclose the truth about the way things really are?

There has been quite a lot of debate about objectivity in recent years, some of it polemical rather than illuminating. On the one hand, some scientists have flocked to the banner of objectivity, hoisting it alongside other banners labeled "truth," "rationality" and "the scientific method" to defend against what they take to be attacks on science itself. On the other hand, there have been critics who declare that scientists' claims to objectivity are a sham, that all purported facts reflect the partial perspectives of those who produce them, that there is no escaping the biases that are due to individuals' interests, background, race and gender.

Lorraine Daston and Peter Galison, two of today's leading historians of science, believe that a historical perspective can cut through these tangled arguments to help us understand what objectivity is -- or at least how it has worked in scientific practice. Their book Objectivity is deeply thoughtful, thoroughly researched and beautifully illustrated. It makes a persuasive case that the modern notion of objectivity emerged only in the mid-19th century. It was then that objectivity prevailed as what the authors call an "epistemic virtue" -- that is to say, a moral attribute of the people who were recognized as makers of knowledge.

Even in a book as big as this one, it's not possible to tell the whole of this story. Instead Daston and Galison approach the topic through examining a particular genre of scientific publication: the collections of images, called atlases, used in such sciences as anatomy, botany, astronomy, physiology, cartography and meteorology. These visual images, many of them beautifully reproduced in color in this volume, were used as reference standards for identifying plants and animals, classifying clouds or galaxies, and mapping the human body or the surface of the Earth. Daston and Galison read all these pictures for what they reveal about the epistemic virtues held by those who made them. Objectivity, on this account, emerged as a dominant scientific ideal with the spread of new techniques of mechanical image-making in the mid-19th century -- especially, but not exclusively, photography.

Before the rise of this "mechanical objectivity," the 18th century celebrated an ideal that Daston and Galison name "truth-to-nature." Botanists and anatomists of the Enlightenment, for example, tried to bring out the fundamental uniformity of nature concealed beneath its apparent diversity. They ignored the idiosyncrasies of individual specimens of plants and animals in an attempt to discern their underlying plan. Doing this required a compound of philosophical acumen and aesthetic taste, abilities that were thought to come more easily to independent gentlemen than to women or members of the laboring classes. Working artists were therefore supposed to take direction from learned naturalists, whose ability to see truth-to-nature was supposed to guide the artist's hand. The social hierarchy implied by this model was already being challenged in the late 18th century, but, as Daston and Galison note, the ideal of truth-to-nature continues to inform much scientific illustration to this day. Botanical or ornithological field guides show not a particular plant or bird, but rather a composite -- an ideal member of each species. For the purposes of taxonomic identification, artists' ability to display the underlying uniformity of nature is still valued.

With the arrival of photography, a new kind of scientific image became possible: one in which individual specimens recorded their own traces, apparently without significant human intervention. This advance went along with an epistemic virtue that prized the mechanical recording of idiosyncratic details and assigned a passive role to the investigator. Daston and Galison give as an example pictures of snowflakes: Eighteenth-century drawings of them are idealized and symmetrical, whereas collections of photographs show that individual snowflakes are never perfectly proportioned. The authors are very insistent, however, that the ideal of mechanical objectivity cannot be identified with the technical innovation of photography.

For one thing, other techniques for mechanical recording, such as kymographs (which record changes in pressure by means of a stylus marking a rotating drum), were also pressed into the service of this ideal. And for another, photography itself can serve alternative ends, as when Francis Galton, the founder of eugenics, assembled composite photographs of people that were meant to show ethnic or social "types." Galton believed in something like the old truth-to-nature ideal, although his technique used photography rather than drawing, and his specimens were not plants but immigrants and criminals from the streets of London.

Mechanical objectivity was more than just a technique; as an epistemic virtue it demanded certain qualities of the investigator, or as he was coming to be known (and it typically was a he), the "scientist." As an ideal, the 19th-century scientist was supposed to be self-disciplined and self-effacing, acting in a machine-like way that did not express individual subjectivity or emotion. As Daston and Galison note, ancient traditions of asceticism and self-cultivation -- or what the philosopher Michel Foucault called "technologies of the self" -- were drawn upon in forming this ideal.

In this connection, the authors' work overlaps with Rebecca Herzig's fascinating book, Suffering for Science (2006), which they do not cite. Herzig shows just how extraordinary was the suffering to which 19th-century scientists subjected themselves in the pursuit of knowledge. X-ray pioneers who willingly surrendered their limbs to the rays knowing that they risked injury, and polar explorers who ventured to their deaths in the Arctic, for example, displayed the extremes of heroic self-sacrifice that the ideal seemed to demand. Only by resolutely suppressing the scientist's self, it was thought, could knowledge be freed from all taint of subjectivity and rendered purely objective.

It is not really surprising that such a strenuous notion of epistemic virtue should have been too much for many to live up to. By the end of the 19th century, mechanical objectivity was being called into question as it began to appear that the traits of individual observers and experimenters could not be entirely excluded from their scientific findings. One response was the resort to what Daston and Galison call "structural objectivity," the trend in mathematics, physics and philosophy to exclude images and ordinary language entirely from scientific discourse. If the laws of physics or arithmetic could be reduced to the form of relations within a purely logical structure, then perhaps science could conquer individual subjectivity. Such, at least, was the hope of Henri Poincaré, Gottlob Frege, Rudolf Carnap and others. An alternative response was to make a new epistemic virtue from necessity, recognizing the impossibility of pure mechanical objectivity and formulating a new ideal that the authors call "trained judgment."

In the 20th century, it came to be accepted that personal traits will always influence scientific observation but that they can nonetheless be cultivated to yield reliable knowledge. The training of laboratory personnel to interpret particle tracks on cloud-chamber photographs or lines on stellar spectra acknowledges that individual judgment is inescapable in science. For the same reason, hand drawing is favored over photography for certain tasks, such as diagramming brain lesions or mapping the Moon's surface. These skills require that investigators cultivate individual traits (in this case, their powers of observation and drawing ability) as a component of their expertise, rather than suppressing their individuality to strive after absolute objectivity.

Daston and Galison's survey traces the history of the epistemic virtues that have left a legacy in our current muddled notions of objectivity. At the end, they look forward briefly to the development of the fields of virtual reality and nanotechnology, in which we can expect new notions of the character of scientific knowledge and its moral dimension to arise. But their primary focus is historical, in a book that is both remarkably ambitious and strategically limited in its scope.

The authors have an astonishing command of the historical record of the sciences in several nations and periods. They move with facility between profound philosophical analysis and detailed accounts of the practices of many scientific disciplines. They give lucid accounts of the philosophical subtleties of Immanuel Kant and Foucault and of the meticulous technical work in such fields as neurophysiology and astronomy. Their aim is nothing less than to bring to light some of the fundamental structures of scientific knowledge over the past few centuries.

Still, Daston and Galison acknowledge that they have not been able to tell the whole history even of one epistemic virtue. Objectivity is not the only such virtue, and approaching it through the study of scientific images is not the only possible way. In the 20th century, social scientists and quantum physicists as well as philosophers have debated the topic of objectivity. Practitioners of the physical and social sciences have argued vigorously about whether they have achieved an objective understanding of the things they are studying. They have sought objectivity by methods very different from the creation of visual images -- for example, by statistical analysis. Methods for analyzing quantitative data offer an alternative path to the goal of objectivity, one that historians have shown was just as important as the use of images.

Daston and Galison's book will take its place among the most distinguished histories of the making of scientific knowledge. In recent years, scholars have been uncovering the historical roots of many of the elements conventionally seen as part of the scientific method: facts, experiments, proof, evidence, quantitative reasoning and so on. Daston and Galison advance these inquiries a good deal, especially because their conception of these methodological precepts as "virtues" links issues of practice to those concerning the identity of the practitioner. As they show in connection with objectivity, particular epistemic virtues demand a particular kind of scientific "self." When factual claims are evaluated, the persona of the scientist is as much at stake as the procedures that were followed. And the cultivation of the self -- the formation of the scientific identity -- is as much a part of the history of science as the development of methods or the growth of the knowledge they produce. Daston and Galison have provided an outstanding model of a history that attends both to scientific methods and to scientists' self-cultivation.

Jan Golinski is professor of history and humanities at the University of New Hampshire, where he currently serves as chair of the Department of History. His books include Making Natural Knowledge: Constructivism and the History of Science (second edition, 2005) and British Weather and the Climate of Enlightenment (2007), both published by the University of Chicago Press.


2 Responses to "How to Be Objective"

    eglazier November 2nd, 2008 at 6:14 am

    When the authors discuss the trend toward objectivity in the 19th century, it was then that the fashion of writing scientific papers only in the passive voice, so as to divorce the work from the worker, became standard: things were done, not the worker did this. This remained the fashion until the late 20th century, when a change to writing in the active voice, i.e., "I did this," began, and now all but the most stilted writing is done that way. As an author's editor for the past 15-plus years, I have found it to be sometimes an uphill battle, but one worth fighting against a few journal editors who are still pretty much asleep and ignoring modern trends.

    The authors of the book reviewed have noted Michel Foucault as a philosopher who wrote about science. Foucault's views might be ignored, for he was one of a group of postmodernists, whatever in hell that might mean, whose views have been shown to be specious by real scientists. Alan Sokal, a physicist at NYU, made this quite clear when he had a paper published in a leading journal of postmodern philosophy that was sheer gibberish, made up using the words of these philosophers but strung together in a rather random manner so that it only appeared to make sense: "full of sound and fury, signifying nothing." For some reason the claptrap of postmodern philosophy attracted a large following among people who seemed enamored of the written word even when the writing had no real meaning. There may be people who believe what the postmodernists wrote about science, but its relation to science was like comparing a sea bass to a bicycle: nonexistent.
    Having said that, it is also true that objectivity in science has in many ways been dumped. Witness some of the scientific fraud that has been published in the past couple of decades, or the complete lack of objectivity of some leading scientists when fame or money are at stake. A prime example is a leading scientist at the NIH who claimed to have discovered HIV and the test to detect it, when in fact he had used a virus sample that he had gotten from the French and claimed it for himself. Recently the French scientist was given an award, putting the American's claim to rest, though it has long been known that the claim was false and that he had lost his position at the NIH. The affair is detailed in And the Band Played On, a book about the early years of the discovery and the political machinations surrounding the AIDS epidemic.

    s h a r o n November 2nd, 2008 at 7:48 am

    I have been trained in the "scientific method". This review elicited more than a few exasperated sighs as I faced the same old objections [droll that that word shares so much with the word "objective"].

    To do science is to approach the examination and study of various entities and relationships with the goal of establishing theories which are "disprovable". Consider this goal in contrast with one which claims that "such and such causes so and so, that's my story and I'm stickin' to it." No invitation there to supply evidence to the contrary.

    The scientific method does rely on observation--usually by a human but sometimes by means of a machine; but in either case, the basic goal of observation is to document former and subsequent events, sometimes in the context of an intervention by either the scientist or a natural event. The goal is to APPROACH a confidence level of some degree in cause and effect. Science requires confidence levels because it assumes you can only APPROACH 100 percent confidence that anything CAUSES something else--You will never hear a real scientist claim that such a thing causes that result. Nope. Science approaches this goal but never realizes it. Its approach is by means of employing strict METHODS in experimentation.

    Some folks are really concerned about what causes this or that. Scientists are, you could say (ahem) obsessed by this--they are obsessed by the goal of doing experiments that are devoid of bias on any number of measures. It really cannot be done; but, again, it is under the rigors of approaching this goal that the scientist carries out her experiments.

    Science does not seek "truth". Really. It seeks meaning, and humans need to feel/see/experience meaning in their lives, otherwise, they languish or go crazy.

    Much of science demonstrates little more than the egos of scientists; however, much interaction among scientists who disagree with one another's findings leads to more and more investigation. An experiment by one or a group of scientists can quickly be debunked by others if, using the same procedures, the experiment cannot produce the same findings as the original--that is, if the experiment and its results cannot be replicated. This is the consequence of rigor: Rigorous science is not truth, but it is carried out such that replications following strict guidelines (such as uhm, replicating exactly the inputs and outputs of the experiment or investigation) are possible.

    Readers interested in the scientific method and what it does and does not claim should check out the information on Wikipedia. As of this date, I found the information there useful, and I think readers of this book, or those with a general interest in science, will greatly benefit from the documentation there. (On the general search page for Wikipedia, I entered the search term "scientific method".)

    The decades-old (centuries-old?) fistfight between the claims of "impersonal" science/empirical observation and the "mushy feel-good" oms of the science-deniers is a non-argument. You will rarely if ever hear a scientist, or someone who adheres to the tenets of science, claiming that one thing causes another. What you will hear or see is a pile of "evidence" which the hearer or seer is welcome to debunk USING THE SAME METHODOLOGY. Science is ALWAYS tentative. Science understands the pitfalls rife in human observation.

    But rather than go on and on about this, consider the vast numbers of folks who believe in God and who commit violent acts on others whose gods do not conform to their own vision of him/her/it. What is lacking in any of these arguments, and within belief in god(s), is EVIDENCE. But no need to make a bunch of folks mad here; you can apply this rigor to a thousand claims heard every day--especially THESE days of the presidential campaign (or any campaign for anything).

    Try this: Every time you hear an assertion (you do know what "assertion" means?), apply this test: If possible, ask the person or entity who makes the assertion, "What is your evidence for that?" Such an approach can suck up a bunch of time, and it does, indeed, for me. If I hear an assertion for which I doubt there is evidence, I have to go check out what evidence is claimed or whether any can be found. (More often than not, no verifiable evidence is provided by the asserter.) This means you have to do the damned research yourself: checking sources--and applying rigor to arrive at your own determination of whether the sources are credible, "reliable", and can provide the evidence you need for your own comfort level in accepting the veracity of the original assertion. (The Merriam-Webster definition of "reliability" is the extent to which an experiment, test, or measuring procedure yields the same results on repeated trials.)

    Now, admittedly, you can judge one individual's statement as reliable by using this test: if you ask fifty people whether McCain did this or Obama said that, you may get enough of the same answer that you can assume reliability. But think about that: Do you accept as truth a given assertion because 100 people agree that it's true? I hope not.

    If you're dealing with determining cause and effect, things get considerably mushier. In trying to determine the cause of something, only the scientific method will get you close to a reliable conclusion: several replications of the same input under the same circumstances result in the same output. Always remember that the primary goal of the scientific method in experimentation is to DISPROVE. Odd, isn't it? But if a certain outcome claimed by one scientist by means of an experiment (manipulating inputs and outputs) CANNOT BE REPLICATED or DISPROVEN, the original scientist's outcome simply remains a working hypothesis. A particularly exasperating and embarrassing scientific procedure involves one scientist's replication of another's experiment. If that cannot be done, you're dead in the water. If the experiment can be replicated because all of the details of the original procedures are adequately documented, but the result is different, you have disproved the original results--that's all. [brushes off hands and goes to replicate the next experiment's findings]

    Now try that method on this assertion: God is the cause of all things.

    Thus, the basic ASSUMPTION of science is that there is no cause and effect relationship between any two events UNLESS you can prove under rigorous scientific conditions (again, check Wikipedia for what these conditions are) that such-and-such a result reliably FOLLOWS such-and-such an input.

    This means that one can, in the absence of credible evidence, be SKEPTICAL. Skepticism is not a sin, folks. It is a good thing to have about your observations all day long, a good thing to apply to what anyone anywhere has to assert about anything.

    BELIEVE nothing. (The emphasis is on the first word.)

    My face is not covered with egg now--my whole body is pummeled with rotten fruit and vegetables. This is how we humans carry out our life as we try to impose meaning on it. I prefer evidence to eggs or rotten tomatoes.
