There’s a big debate in science about what to do with negative results. It boils down to more or less this:
We scientists publish only positive results. There are a lot of reasons for this, some more valid than others. The first one is that it’s hard to prove things beyond a shadow of a doubt (Popper, stop spinning in your grave). Formally, all you can do is disprove things. If we ever witnessed, and recorded, an apple that failed to fall from a tree and just hovered there, we’d have a tremendous challenge to our understanding of gravity.
We can accumulate evidence for things, and a lot of what we do is precisely that. But we can’t prove things as a mathematician can.
This leads to strangeness when science meets the real world. Witness the debate about whether evolution is “just a theory,” which is entirely predicated on a profound misunderstanding of how science works.
When you run an experiment and you don’t get results, there are many things that can explain that. Perhaps your data was bad. Perhaps one of your assumptions was wrong. Perhaps you were unaware that you were making assumptions in the first place.
Say that, for some reason, I run a double-blind, randomized, controlled, properly powered (i.e. with enough samples), sufficiently long, well-sampled (lots of brands) experiment to see if people who drink tap water get less cancer than people who drink bottled water. I have no idea why this could be the case, since I just made it up, but bear with me. Now imagine that I discover that the rate of cancer among people who drink tap water is 25% of the rate of cancer among people who drink bottled water, p<0.00001 and all that. It survives peer review and gets published in a major journal. The media will probably be all over it.
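To make “properly powered” a bit more concrete, here is a minimal sketch of the back-of-the-envelope arithmetic behind “with enough samples.” The 1.0% and 0.25% rates, and the function name, are pure inventions for illustration; the formula is the standard normal-approximation sample-size calculation for comparing two proportions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def per_group_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Rough per-group sample size needed to detect a difference between
    two proportions with a two-sided z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

# Purely hypothetical rates: 1.0% cancer incidence among bottled-water
# drinkers vs 0.25% among tap-water drinkers (25% of the bottled-water rate).
print(per_group_sample_size(0.010, 0.0025))  # about 1,700 people per group
```

Nothing here is specific to my made-up experiment; it just shows the order of magnitude of “enough samples” for an effect that large.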
You won’t care why, and you won’t care how, but you’re damn sure switching the people you care about to drinking tap water from now on.
Eventually, someone will discover which ingredient in the bottles is dissolving into the water, and how it affects human DNA, and so on. There will be entire scientific careers, many papers, and perhaps a few conferences on why and how this happens. But the fact itself that tap water is safer (or that bottled water causes cancer) is important, right here, right now.
Now picture the much more likely scenario in which I perform the same experiment, and find that absolutely nothing happens. What does it mean? Does it mean that bottled water is just as safe, cancer-wise, as tap water? Maybe. Is this unexpected? No. Most sane people fully expect tap water and bottled water to be just as safe as each other. The experiment, and its result, add very little in the way of new information to the world. This makes them less valuable.
There’s also the problem that I could’ve simply missed something. Perhaps my sample missed a brand with a radioactive bottle (Nuka-Cola?) and no one noticed, because the team had never heard of it. When you get a positive result, there’s something there to dissect, analyze, and learn from. When you get a negative result, something isn’t there. You failed to catch it. What can be learned from it, at least as a first approximation, is that the same steps are unlikely to catch that thing.
This is why cryptozoologists persist: they can’t be proven wrong. You can’t show them that there is no Sasquatch. Most people, given enough negative evidence, will generalize that a statement is false and move on. Most of us believe there is no Sasquatch, and we believe it because people try and fail to capture a Sasquatch over, and over, and over.
Those failed attempts to capture Sasquatchii, then, have a little bit of value to everyone. This is why the scientific community would like negative results published: because we can still learn from them. Some more (great, obvious ideas that should work but don’t), and some less (stuff that ends up published in the Annals of Improbable Research).
Even if there are errors, and omissions, and mistakes, we should still learn from our work. But who’s going to curate, collect, peer-review, and publish all that? Writing stuff up for publication is a lot of work, and reviewing is thankless, unpaid, and a lot of work as well. But here’s the thing: we self-censor everyday negative results. We don’t send them out for publication; we don’t bother too much with them.
We might as well just put them on the Internet, and let people quote and cite and learn from, or ignore, them as they will.
What if there’s a great idea there, and someone takes it from me? Well, I hope that humanity benefits, for one. I hope that they credit me, if they take my idea. And I hope I’m not so conceited as to think that I have many great ideas. A few good ones, I hope.
So I’ll start posting stuff I do here. It will be (hopefully) interesting, (perhaps) thought-provoking, (potentially) flawed, and unfit for regular academic publication. At least for the time being. If you like it, let me know. If you use it, please credit me – I am, after all, an academic, and need credit. If you improve it, or fix it, I’d love to know about it. If you don’t care about it, ignore me.