“The most important lesson in all of this is that it’s easy to make bonehead mistakes in statistics … The most important question is ‘have you actually measured what you think you measured?’ because there are all sorts of ways to screw that up.”
Originally shared by Yonatan Zunger
This is one of those news items that hasn’t gotten nearly enough coverage — because it’s the sort of thing that makes professionals go OH YOU HAVE GOT TO BE FUCKING KIDDING ME.
What happened? Back in 2005, the Bureau of Justice Statistics (a branch of the DOJ) did a study on recidivism, and found out that the rate is tremendously high: 68% of state prisoners end up back behind bars within three years of release. Once a criminal, always a criminal, they concluded — and people have been shaping policy to match.
But a team read through it carefully, and it turns out that the BJS made a basic, bonehead mistake in their statistical analysis. They thought they were measuring whether people who go to prison will reoffend; what they actually measured was that most people in prison, on any given day, are repeat offenders.
Which makes sense, because repeat offenders spend a lot more time in prison than one-time offenders.
These are not the same thing. At all. It turns out that if you do the analysis right, only 30% or so of prisoners will ever re-offend, and only 11% will do so multiple times. In fact, this “once a criminal, always a criminal” rule appears to be completely false — unless, that is, you structure policies so that anyone with a criminal conviction is treated like a permanent criminal, and so not allowed to (say) get virtually any job other than “criminal.” In which case, you will in fact end up with lots of criminals.
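The gap between the two numbers comes from length-biased sampling: repeat offenders pile up more prison time, so a point-in-time snapshot of who is *in* prison over-samples them, while following a cohort of *released* people does not. Here's a toy simulation of that effect — all the numbers (the 30% reoffense rate, the 2–5 stints per repeat offender) are invented for illustration and are not from the BJS data or the paper:

```python
import random

random.seed(0)

# Toy model: 30% of offenders ever reoffend; repeat offenders serve
# several separate prison terms, so they accumulate far more time inside.
N = 100_000
stints_per_offender = []
for _ in range(N):
    if random.random() < 0.30:          # repeat offender
        stints = random.randint(2, 5)   # several separate prison terms
    else:                               # one-time offender
        stints = 1
    stints_per_offender.append(stints)

# Cohort view (the right question): follow everyone released after a
# first term and ask what fraction ever comes back.
cohort_rate = sum(s > 1 for s in stints_per_offender) / N

# Snapshot view (what the flawed analysis effectively did): sample a
# random person-in-prison moment. With equal-length stints, each stint
# counts once, so repeat offenders are over-sampled in proportion to
# how many terms they serve.
total_stints = sum(stints_per_offender)
snapshot_rate = sum(s for s in stints_per_offender if s > 1) / total_stints

print(f"cohort reoffense rate:        {cohort_rate:.0%}")
print(f"snapshot repeat-offender share: {snapshot_rate:.0%}")
```

With these made-up inputs, the cohort rate comes out near the true 30%, while the snapshot share lands around 60% — same population, wildly different answer, purely because of how you sample it.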
In the post linked below, Andreas Schou explains part of what went wrong in the study. You can read more in the linked Slate article (http://www.slate.com/articles/news_and_politics/crime/2015/10/why_do_so_many_prisoners_end_up_back_in_prison_a_new_study_says_maybe_they.html), and more still in the paper that actually found the mistake. (http://cad.sagepub.com/content/early/2014/09/26/0011128714549655.abstract)
The most important lesson in all of this is that it’s easy to make bonehead mistakes in statistics. If the statistics matter — if you’re going to use them to prescribe drugs or set public policy or something like that — it’s very important to have people check your work, repeatedly, and ask the right questions. The most important question is “have you actually measured what you think you measured?” because there are all sorts of ways to screw that up.
There’s also a great new book on that subject: Alex Reinhart’s Statistics Done Wrong. (http://www.statisticsdonewrong.com/) Please, if you do statistics in your daily life, read it.