December 26, 2009

How scientists fail and succeed

There's a great article in the latest Wired about how scientists deal with conflicting results and the reality of the scientific process - Accept Defeat: The Neuroscience of Screwing Up. Jonah Lehrer usually gets it right.

It appears that science is not the precise... well, science, that it's made out to be. Results only rarely confirm the hypotheses you set out to confirm, and a great deal of the data either conflicts or makes no sense at all. Scientists, it seems, are just as ready as anyone else to reject observations that conflict with their own preconceptions. The question I'm left with is whether this really is so surprising, or a "failure" of the scientific process.

The heart of the article is the work on the psychology of science carried out by Kevin Dunbar:

Kevin Dunbar is a researcher who studies how scientists study things — how they fail and succeed. In the early 1990s, he began an unprecedented research project: observing four biochemistry labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn’t satisfied with abstract models of the scientific method — that seven-step process we teach schoolkids before the science fair — or the dogmatic faith scientists place in logic and objectivity. Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) So Dunbar decided to launch an “in vivo” investigation, attempting to learn from the messiness of real experiments.

He ended up spending the next year staring at postdocs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analyzing the data. “I’m not sure I appreciated what I was getting myself into,” Dunbar says. “I asked for complete access, and I got it. But there was just so much to keep track of.”

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.

Dunbar was fascinated by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”) However, when experiments were observed up close — and Dunbar interviewed the scientists about even the most trifling details — this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies riddled with anomalies. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.”

This is exactly right, although I wouldn't go so far as to call it a "dirty secret". Isn't it common knowledge that many great advances in science have been brought about by lucky accidents or unexpected observations? The tentative, trial-and-error nature of the scientific process may produce a lot of confusing failures, but it also introduces serendipity, and I don't think we should want to be without that. The dirty secret might be that scientists tend to behave just like anyone else would: rejecting surprising, anomalous findings in favor of those that confirm what they wanted to confirm.

Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

But this is only a problem if you regard science as a solitary practice, a process that exists in a frictionless vacuum. In reality, science is a community effort. The scientific process is not only about theories and hypotheses, predictions, experiments, and analysis of results; it should also be considered within its social context. Science is the whole construction that ensures that our subjectivity doesn't get the best of us.

While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.

I can really recommend reading the entire article. The part about the neuroscientific studies on the brain regions responsible for detecting dissonance in the data is especially fascinating.
