Yesterday I received a response to my inquiries concerning the pilot data being used by the MoJ to support the extension of Payment by Results. I have reprinted the letter in the blog post below; this post contains some commentary on the replies I received.
- The MoJ used Section 22 of the Freedom of Information Act to decline to answer two of my questions. They say that the information I was seeking is due to be published on July 25. I am prepared to wait until then before deciding whether or not to appeal their reply.
- I asked why they published the results early. They said they did this “to ensure the information was made public as soon as it was available”, despite admitting in other answers that it was incomplete. I suspect they wanted to get some ‘good news’ in before the summer vacation and in advance of the CSR negotiations. But in my view it looks shoddy, and it is evidence of data and statistics being used for political purposes. However, I guess all governments do that… don’t they?
- Their answers do at least assert that they have sought to compare like with like in constructing the comparison cohorts.
- Their answer to question 4 does, in my view, confirm that this data is incomplete and premature.
- They originally said that a key difference between the cohorts is that in this group “reconvictions only count offences for which the offender was convicted at court, whereas the National Statistics proven re-offending measure also includes out of court disposals (cautions)”. I asked what the impact of that difference was likely to be. They referred me to “Table B3 of annex B from the MoJ’s proven re-offending statistics quarterly bulletin”: https://www.gov.uk/government/publications/proven-re-offending-2. I have looked at this table and it is not entirely clear, so I think I am going to have to go back to them on this and seek further clarification. But do note that they said “We have not produced alternative interim figures on what the impact would be if different rules (such as including cautions) had applied”. That seems a bit sloppy to me. This is a critical difference, after all, and I suspect that if the data were not favouring the pilot providers, they would be seeking further clarification themselves!
- I asked whether the comparison groups (used to evidence that the pilot intervention was in fact working) were selected using some kind of randomised selection. They said “The control group will be selected by an Independent Assessor using Propensity Score Matching (PSM), the methodology for which has been published at: Peterborough Social Impact Bond: an independent … - Gov.uk”. So the answer is ‘NO’: the comparator groups will be selected by an ‘independent’ assessor (paid by the government, I assume). I looked at the reference document and here is a quote from it: “It should be noted that, unlike random control allocation, PSM cannot take account of unmeasured differences which may account for variation in reconviction aside from ‘treatment received’”. Uh huh. But it goes on to assert: “However, PSM [propensity score matching] is widely regarded as one of the best ways of matching quasi-experimentally (Rosenbaum, 2002), and it has been increasingly used in a criminological context (e.g. Wermink et al., 2010).” So that is alright then. Excuse me while I give you this new medicine that has been quasi-experimentally tested on people who are sort of similar to you… (There is a sketch of how PSM works at the end of this post.)
- I asked: “For Doncaster, success ‘will be determined by comparison with the reconviction rate in the baseline year of 2009’. How will this accommodate national and/or local trends in (say) sentencing practice or levels of crime?” They replied: “The five percentage point reduction target was agreed after analysis of historic reconviction rates established that this would illustrate a demonstrable difference which could be attributed to the new system and not just natural variation.” That is not an answer to my question; I will need to go back to them on this.
- I asked about the 6 versus 12 month comparison and how the headline data (based on six months) was going to look against the usual (12 month) data. They said in reply: “The statistical notice made clear the limitations of the information presented and the care that should be taken in interpreting these interim figures.” Remind me: was that subtlety present in the press releases that went out when this interim data was released…?
- They missed the point completely on my question about seasonality…
- Please read their answer to my question about why 19-month data was used. Please let me know what you think. I am thinking ‘wool’, ‘eyes’ and ‘what do you really mean?!’
- The maths question is funny. They said “The figures presented were the rounded versions of the actual figures, which were 68.53 and 79.29”. So I have done the calculations again and the result I get this time is 15.7. So they are sort of correct - but why leave out the first decimal place? (I have reproduced the arithmetic in a sketch at the end of this post.)
- I asked about statistical significance (the test of whether a difference is just a chance difference or one that indicates a real effect is in play). This is what they said: “We have not carried out statistical significance tests on the interim figures because, when it comes to the final results, neither pilot will be assessed on the basis of whether they have achieved a statistically significant change.” (A sketch of what such a test would involve is also at the end of this post.)
OK. Let me repeat that, in big and bold:
We have not carried out statistical significance tests on the interim figures because, when it comes to the final results, neither pilot will be assessed on the basis of whether they have achieved a statistically significant change.
So, Payment by Results could well end up rewarding differences that are nothing more than random chance.
Is that a solid basis for the distribution of taxpayers’ money?
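Since Propensity Score Matching is doing so much work in their answer, here is a minimal sketch of how the technique works in general, in Python. To be clear: the data, the covariates and the matching rule below are all hypothetical illustrations of PSM as a technique, not the Independent Assessor’s actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical offender records: two measured covariates (say, age and number
# of previous convictions) plus a flag for whether they were in the pilot.
n = 1000
X = np.column_stack([rng.normal(30, 8, n), rng.poisson(3, n)])
in_pilot = rng.integers(0, 2, n)

# Step 1: model each offender's probability of being in the pilot cohort,
# given only the measured covariates.
propensity = LogisticRegression().fit(X, in_pilot).predict_proba(X)[:, 1]

# Step 2: match each pilot offender to the non-pilot offender with the
# nearest propensity score (nearest-neighbour matching, with replacement).
pilot_idx = np.where(in_pilot == 1)[0]
pool_idx = np.where(in_pilot == 0)[0]
diffs = np.abs(propensity[pool_idx][None, :] - propensity[pilot_idx][:, None])
matched = pool_idx[diffs.argmin(axis=1)]

# The matched group is balanced only on the covariates we measured. Any
# unmeasured difference can still drive the gap in reconviction rates,
# which is the very limitation the MoJ's reference document concedes and
# the one that random allocation would have avoided.
```

The closing comment is the crux: matching balances only what you measure, which is exactly the caveat buried in their own methodology document.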
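On the maths point, I am assuming that the 15.7 is the gap between the two figures expressed as a percentage of the lower one, since that is the reading that reproduces it from the unrounded figures they supplied:

```python
# Checking the arithmetic, on my assumption that 15.7 is the percentage gap
# between the two reconviction figures, measured against the lower one.
low, high = 68.53, 79.29
print(f"{(high - low) / low * 100:.2f}")  # 15.70

# For illustration only: if the inputs had been rounded to whole numbers
# first, the answer would shift, which is why input precision matters.
print(f"{(79 - 69) / 69 * 100:.2f}")      # 14.49
```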
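And on the significance point, here is a minimal sketch of the kind of test they say they have not run: a two-proportion z-test of whether a drop in reconviction rates is bigger than chance alone would produce. All the counts below are hypothetical, purely to show what such a test would tell you.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(reconvicted_a, n_a, reconvicted_b, n_b):
    """Test whether two reconviction rates differ by more than chance."""
    p_a, p_b = reconvicted_a / n_a, reconvicted_b / n_b
    # Pooled rate under the null hypothesis that the two groups are the same.
    p_pool = (reconvicted_a + reconvicted_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

# Hypothetical cohorts: 380 of 1,000 reconvicted in the pilot, 410 of 1,000
# in the comparison group. Is that 3-point drop a real effect or just noise?
z, p = two_proportion_z(380, 1000, 410, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = -1.37, p = 0.170
```

With these made-up numbers the p-value is around 0.17: a three-point drop in cohorts of this size could easily be chance. That is exactly the question the MoJ says it will not be asking.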
Courtesy of Jon Harvey at A Just Future: Fair for All