This week in mySociety work, we published a blog post about Freedom of Information being good.
That post links to a good study on the connection between corruption and freedom of information, which makes the point that the mixed evidence base comes from overlapping effects. Increasing transparency has an immediate effect of increasing the detection and successful prosecution of corruption, while in the longer term decreasing the probability of corruption occurring at all. If you look at this the wrong way, though, you get the very counter-intuitive finding (which that paper points out is “in contrast with the most straightforward economic theories of crime”) that more transparency increases corruption.
My general sense is that backfire (when something doesn’t just fail to work, but has the opposite of the intended effect) is rare, but it makes such a good story that it gets through our guards a bit. It has certainly gotten through mine before: a few years ago I wrote a blog post about my lost confidence in backfire research around correcting people’s information. Similarly, the counter-intuitive finding that “scared straight” programmes increase crime doesn’t seem to hold up, and the evidence is probably best described as “no effect” (maybe it sometimes makes things worse, sometimes makes things better). It’s much more likely that something simply doesn’t work than that it manages to do the opposite of what you want.
Taking this too far leaves you blind to “are you sure you’re not making things worse?” questions, but a clearer sense of when counter-intuitive effects are suspect helps make that discussion sharper.
I’m writing something complicated at work, so I didn’t want to open the new sections up too much in my head. Instead I have been slotting in a few paragraphs incorporating things that have been published more recently.
When you write something for years, new material keeps coming out. This is in principle good news, because more information makes you more right: either it fits nicely into your existing argument (validating it), or it doesn’t, and you can take out the wrong thing before you look stupid.
Generally, I’m getting to the point where new publications are not surprising and fit nicely into the existing argument. I did have a bit of a shock last year, though, when the new version of the Google Ngram data changed the interpretation, which then needed some thought about extra ways of validating the approach that are less likely to be upset by future data. The new version of the argument is slightly less sharp, but should be more defensible over time.
One of the fears about getting to the end is locking in major errors. About five years ago I carved out a section and polished it into its own essay for a competition. It only got as far as being longlisted, but I’d feel really stupid if it had actually been published, because I’ve since found a few new sources (one, to be fair, published later that year) that substantially questioned my core narrative of the historical process I was describing. Writing such a cross-disciplinary book makes it feel especially vulnerable to this.
At some point, you have to stop writing and start being wrong though.