Ten statistical commandments

Thou shalt not abuse stats!
20 December 2019

Interview with

Tamar Makin, UCL

Statistics and graphs

How do you make a statistician shudder? The answer is to read a few neuroscience papers, where the abuse of statistics appears to be rampant in some quarters, rampant enough to motivate eLife editor and UCL neuroscientist Tamar Makin to draw up ten "thou shalt not" commandments. As she explains to Chris Smith, her manuscript turned into one of the most popular eLife papers yet…

Tamar - I was at the annual Society for Neuroscience meeting, and I was interested in a poster showing how you can use this really cool new technique called optogenetics to change the way the brain is organised. What they do is measure the neuronal responses in a given brain area, then manipulate that area with the technique, and then record the neurons again. To quantify the differences, they first identify the neurons that are most responsive in a certain way, and they go back to the same neurons after the manipulation. And, lo and behold, they find that the neurons that were originally very selective are now responding much less, whereas the neurons that didn't really respond are now relatively more responsive. So they thought they had a really fascinating example of changing how the brain responds and how it is organised. But, in fact, what they found is that if you pre-select your neurons, or your samples, based on a specific characteristic, that characteristic is going to be expressed less when you repeat the measurement. So basically, with optogenetics, they found a very simple statistical artifact.

Chris - Basically they had to use optogenetics to discover regression to the mean!

Tamar - Exactly!
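[For readers who want to see the artifact Tamar describes in action, here is a minimal Python sketch. The numbers are made up for illustration and have nothing to do with the poster in question: it simulates two noisy recordings of the same neurons with no manipulation at all, pre-selects the "most responsive" neurons from the first recording, and shows that they look less responsive the second time purely because of regression to the mean.]

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000

# Each neuron has a stable "true" selectivity; each recording adds independent noise.
true_selectivity = rng.normal(0.0, 1.0, n_neurons)
session1 = true_selectivity + rng.normal(0.0, 1.0, n_neurons)
session2 = true_selectivity + rng.normal(0.0, 1.0, n_neurons)  # no manipulation at all

# Pre-select the most and least responsive neurons using session 1 only.
top = np.argsort(session1)[-100:]     # top 10%
bottom = np.argsort(session1)[:100]   # bottom 10%

print(f"Top 10%:    session 1 mean = {session1[top].mean():.2f}, "
      f"session 2 mean = {session2[top].mean():.2f}")
print(f"Bottom 10%: session 1 mean = {session1[bottom].mean():.2f}, "
      f"session 2 mean = {session2[bottom].mean():.2f}")

# The pre-selected "selective" neurons look less selective the second time, and the
# "unresponsive" ones look more responsive, purely because of noise in the selection
# step: regression to the mean, no optogenetics required.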

Chris - Now, tell us a bit about you, Tamar, and how you came to be doing this.

Tamar - I'm a neuroscientist. I'm also an editor at eLife, and I go through a lot of manuscripts, both as part of that role and for my own education. I also go through many papers with my lab, as part of my students' interest and training in our weekly journal club. You know, we see these problems that are just recurrent and repeated, and it can get very frustrating, especially when there's a potential clinical impact to how the discovery is being interpreted. One day at journal club we were going over a particularly bad paper in a particularly good journal, and I got so exasperated that I said, okay, let's make a list of the ten simplest rules that everyone needs to follow when they're writing a manuscript. We came up with what we call the ten commandments, the "thou shalt nots". And I was surprised to see how useful it was for my students: whenever we went over a paper in journal club, we would go through this list of ten "thou shalt nots" and see whether any had been violated. Seeing how well this was received by my group, I thought, you know, maybe other people would also find it useful. What motivated me most is the idea that we can be constructive about it, so we don't just have to tell people what not to do; we can also tell them what to do.

Chris - Is there also a sort of deeper-rooted issue here, which is that it's a bit of a blind-leading-the-blind situation? The reviewers on some of these papers are just as likely to make the same mistakes as the people writing them, because no one is setting out to deceive here; rather, they're making statistical mistakes because they haven't been taught how to do statistics, because at the end of the day stats is a very specialised thing that you need a statistician to help with?

Tamar - Absolutely. I think this is certainly the responsibility of the authors, but at the end of the day the ultimate responsibility lies with the community. Do we accept evidence that has been misinterpreted? Do we accept evidence that hasn't been carefully interrogated statistically?

Chris - If we know why we're in this position, we know how to fix it, don't we? So this is really two questions rolled into one: why is it happening, and therefore what do we have to do so that the next generation of neuroscientists who send you papers don't make you go 'tch'?

Tamar - I think that, for the average student, statistics is kind of dull. Even if they do take comprehensive statistical lessons, that information is bound to evaporate. So I think the question for us is how we maintain this standard of statistical interpretation as a lively discussion. When we were writing this manuscript, we wanted to launch a discussion and a conversation, and I think social media potentially offers an existing tool to try and maintain these conversations.

Chris - So it really comes down to this: stats still has an image problem, doesn't it? I mean, this happened to me when I was at medical school; I remember someone declared the epidemiology lectures "epidemi-holiday" lectures and just took the week off! Surely that's the issue, isn't it? If we can give the subject a better image, more people will get into it, we'll teach it better, and then this problem will go away.

Tamar - Yes, absolutely. But we should also insist on a higher standard of how science is carried out and how science is interpreted, because at the end of the day, what's the point of pouring in so much money, training and effort if we are not able to correctly infer or interpret what the results are actually saying? I think every student, every PI, every community really needs to insist that this basic training and understanding of what the results mean is maintained at a higher level.
