(Update: the next post in the "junk DNA" series will be up on Friday. Thanks to all of you who pitched in suggesting new topics, asking questions, and proposing guest blogs. We have a rich schedule coming up!)
Here's the checklist for a scientific paper:
- you come up with a hypothesis;
- you design an experiment to test the hypothesis;
- you gather the data;
- you look at the data and decide whether or not your original hypothesis was correct.
Oh, yeah. We forgot the analyst! Well, you're in luck, because that's exactly my job.
After they gather the data, the experimentalists show it to us, jumping up and down in excitement: "Look what we found!"
And we, the analysts, raise an eyebrow, click our tongues, and reply: "Yeah, but can we prove it?"
So we design a new statistic, we write code to implement it, and we run, graph, and debug until we've proven what the experimentalists saw in the first place. Or disproven it; it can go either way. Because the truth is, the human eye naturally looks for patterns. It's not objective. What you "see" is not always real. A good eye can help you make a conjecture, but then you have to prove your hypothesis. If you can't prove it, it's not real.
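To give a flavor of what "design a statistic and write code to implement it" can look like, here is a minimal sketch of one classic approach, a permutation test. Everything in it is illustrative: the function name, the two-group setting, and the made-up numbers are all hypothetical, not taken from the paper in question.

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means.

    Shuffles the group labels many times and returns the fraction of
    shufflings whose statistic is at least as large as the observed
    one: a one-sided p-value. A small value means the pattern the eye
    saw is unlikely to be a fluke of random grouping.
    """
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a = pooled[:len(a)]
        perm_b = pooled[len(a):]
        stat = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
        if stat >= observed:
            count += 1
    return count / n_perm

# Hypothetical measurements: does group A really run higher than group B?
a = [2.1, 2.4, 2.0, 2.6, 2.3]
b = [1.8, 1.9, 2.0, 1.7, 2.1]
print(permutation_test(a, b))
```

The appeal of this kind of test is that it makes no assumptions about the shape of the data: if shuffling the labels rarely reproduces what you observed, the eye probably saw something real.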
That's the core of scientific thinking.
So the other day we got the response to a paper we submitted for publication a few months ago. We had some data, which we summarized with a set of nice graphs, and then did some statistical analysis to support the claim. The response? Rejected.
Turns out, the reviewer looked at our analysis, acknowledged the highly significant p-value (just so you know, a "highly significant" p-value means the data would be extremely unlikely to look this way by chance alone, which is about as close to proof as statistics gets), then stared at the data. He stared, stared, and stared, and just couldn't see it. So he wrote: "The data doesn't pass the eye test."
May we suggest an eye doctor, kind Sir?
Picture: Ogunquit Beach, ME. Canon 40D, focal length 60mm, exposure time 1/800.