23 April 2011

Unhealthful News 113 - Dishonest time-series analysis (more interesting than it sounds)

It was suggested to me that I write about this story in which the UK Office for National Statistics released an incorrect report that said that the rate of heavy drinking among women had increased by about 20% over a decade.  The problem was that the methodology for measuring how much wine is in "one glass" changed in 2006, reflecting a belated recognition that the average real-world serving of wine had for some time been larger than the statistics were counting it as.  It turns out that this adjustment – such that someone who says "I drink two glasses of wine per day" is now counted as consuming more total alcohol than someone giving the same answer in 2005 – accounts for more than the entire supposed increase.  That is, the real trend has been downward, but the recorded data shows the inevitable huge artificial jump for women (who drink a much larger portion of their alcohol in the form of wine) at the time of the adjustment.

I was not going to write about this because others covered it very nicely, and my first thought was that I had little to add to what had been written (this one is particularly good and is at a great blog about beer, and it links to the two other good ones).  These other bloggers noted in particular that the ONS had previously attached a note to their own statistics reminding users/readers that the adjustment in 2006 needed to be considered when analyzing the data, and that when the new report was aggressively challenged, they appended a retraction of the erroneous claim and an apology.  But, the bloggers also note, as far as they could find, none of the press outlets that breathlessly reported the original claim have printed a retraction.

The reason I decided to write about this was that I read about another possible example to watch out for; I was sure there would be a third if I waited a few more days, but so far there has not been, so I decided to run with the two.  The other example, reported by the CAGE blog, starts with the observation that in 2008 India adjusted its body mass index cutpoint between "normal" and "overweight" down from 25 to 23.  Just to put that in perspective, many Western countries adjusted "normal" down to 25 earlier in the decade, where it remains; even this is so low as to be meaningless, roughly translating into a reasonably muscular man of average height being "overweight" if he is carrying only about six or eight kilograms more fat than someone who would be described as skinny or is at body-sculpting levels of fat.  CAGE predicted in 2008 that this would lead to claims of an increase in obesity in India, and they are now saying that this has happened.  They did not actually find a smoking-gun report of a time series that ignored the adjustment, as in the ONS case, but they reported news stories that hint that people are making that mistake less formally.
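To get a sense of how big that kind of artifact can be, here is a toy calculation in Python (the BMI distribution below is invented for illustration – it is not real Indian survey data): classify the very same people under the old and new cutpoints and, with these made-up numbers, the share labeled "overweight" roughly doubles overnight, with nobody gaining a gram.

import numpy as np

# A made-up BMI distribution – illustrative only, not real Indian survey data.
rng = np.random.default_rng(0)
bmi = rng.normal(loc=22.0, scale=3.5, size=100_000)

# The same people, classified under the old and the new cutpoint.
for cutpoint in (25.0, 23.0):
    share = np.mean(bmi >= cutpoint)
    print(f"cutpoint {cutpoint}: {share:.1%} classified as overweight")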

Unlike the British cutpoint for women drinking too much alcohol, which at least is just (just!) into the range that is believed to start to be harmful, the BMI cutpoint of 25, let alone 23, is well below the start of the range that has been shown to be unhealthy.  The measure just makes no sense at all.  It was interesting that the Times of India article CAGE linked to in their 2008 post said, "Doctors say these guidelines are the need of the hour since the number of those suffering from obesity and related problems is on a rise" (as if the bizarre definition change would somehow help people who were actually obese), and made various hand-waving statements about Indians being somehow different from other H. sapiens.  While there is no direct connection, this strikes me as strangely similar to the reports of India trying to ban their incredibly popular dip products (oral smokeless tobacco and non-tobacco products) that I have written about over the last few weeks.  It is hard not to get the impression that the elites in India think that, because of their unusual history and head count, they can just decide their part of the world works differently than the rest of it.

(Yes, I know, it is pretty rich for an American to be chiding someone else for practicing nationalistic exceptionalism.  The only defense I can offer is that most of those who make claims about American exceptionalism seem not to suggest that the rules of economics or biology affect us differently, but rather stick to claims about exceptionalism in ethics and socio-political matters, which are at least theoretically defensible though quite dubious in practice.)

Anyway, the take-away points about these statistical adjustments are the following:  Changes in definitions like this occur all the time, and they are not difficult to deal with.  The two examples presented here are utterly trivial to deal with since they are just a change in labeling; someone doing a time series analysis today can just convert pre-change statistics to post-change ones (or vice versa) and report a consistent number.  That is, if someone wants to report the time trend in British drinking that runs through 2006, all they have to do is recalculate the pre-2006 quantities based on the post-2006 definition of "a glass".  If they want to measure the increase in "overweight" in India, it is easy to apply the silly new definition to old BMI measures.  In other cases there is a shock to the data that is not just an arbitrary change in how to label the data, such as when the phrasing of a standard survey question changes in a way that gets radically different answers, often because no one realized the change would matter (e.g., if the "same" survey changes from asking men "did you have gay sex in the last year" to asking "did you have sex (including oral) with a man in the last year", there will be a huge jump that cannot be corrected in the same way as a changed interpretation of the data).  In that case, the standard approach is to put in a variable that indicates whether an observation came before or after the change, which basically means assuming the time trend is continuous and that the jump that year is an artifact.  The same method can be used for a case like the wine or BMI examples if you only have the categorized data (you have the label "overweight" or "normal", but not the actual BMI number), though in that case if you are doing a study and sending out a press release (as ONS did), there is no excuse for not going back and looking at the raw numerical data.
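To make both fixes concrete, here is a minimal sketch in Python (all the numbers are made up for illustration – the glasses-per-week series and the unit conversion are invented, not the actual ONS figures): first the trivial rescaling to a single consistent definition, then the indicator-variable regression for when rescaling is not an option.

import numpy as np

# Invented data for illustration; this is not the ONS series.
years = np.arange(2000, 2011)

# Reported mean "glasses of wine per week" – a gently declining series.
glasses = np.array([7.0, 6.9, 6.8, 6.7, 6.6, 6.5, 6.4, 6.3, 6.2, 6.1, 5.9])

# Hypothetical conversion: suppose one glass counted as 1 unit of alcohol
# before 2006 and as 2 units from 2006 onward (to reflect larger servings).
factor = np.where(years < 2006, 1.0, 2.0)
published = glasses * factor  # the inconsistent published series: fake jump at 2006

# Fix 1: relabeling.  Because the change is just a change of units, recompute
# the pre-2006 values under the post-2006 definition and the jump vanishes.
consistent = glasses * 2.0
print(np.diff(consistent))  # every year-to-year change is now negative

# Fix 2: an indicator variable, for when the underlying answers are gone or
# the shock is not a simple rescaling.  Regress the published series on a
# constant, a time trend, and a before/after flag; the flag absorbs the
# artificial jump, and the trend coefficient estimates the real direction.
t = (years - years.min()).astype(float)
post = (years >= 2006).astype(float)
X = np.column_stack([np.ones_like(t), t, post])
(intercept, trend, jump), *_ = np.linalg.lstsq(X, published, rcond=None)
print(f"trend: {trend:+.2f} units/year; artificial 2006 jump: {jump:+.2f} units")

The point of the second regression is that even from the contaminated published series, a correct analysis recovers a downward trend, while naively fitting (or eyeballing) the series without the flag would report an increase.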

Given how simple this is, and that it is probably taught in the second or maybe even first semester of any decent applied statistics program, there is no excuse for the British study.  This is not something that an even slightly competent researcher could possibly fail to notice in the data, even if they somehow overlooked the information about how the definition changed ("hmm, let's look at the trend from year to year: down a bit, down a bit, same, down a bit, huge increase, down a bit, down a bit – yup, it sure looks like an upward trend to me").  Either someone was intentionally trying to mislead their audience or they were in so far over their heads – and by this I mean they knew absolutely nothing about analyzing statistics, but did so anyway – that they had no excuse for claiming their analysis was worth anything.  Either way, it is important to recognize the difference between honest disagreement (which this obviously was not, since the ONS retracted it), honest mistakes (which this was not, because the mistake is too glaring to make honestly), and dishonesty (either in the form of lying about the world or about one's qualifications).

The jury is still out on India on that point.  I will wait to see if I or CAGE can find a case where someone explicitly and quantitatively mis-reports the time trend.  Perhaps, notwithstanding the doubts I have about Indian government wisdom, such a technical error is less likely to happen there.  After all, judging from the floods of impressive-seeming Indian applications to graduate school I have seen, there must be approximately one million people who have been trained as health statisticians in India.  (Interesting trivia:  That was at the University of Texas.  At the University of Alberta School of Public Health we got very few applications from math-whiz Indians, who seemed to set their standards higher, and instead got floods of applications from unqualified Africans.)

So, though I had little more to add to what others had already written about these, there was a lesson that I will try to keep in mind:  Most newspaper readers, upon seeing the retraction of the ONS claim (if, hypothetically, they saw it), probably could not recognize that this was not some super-complicated mistake that a competent and honest group of researchers might occasionally make.  Therefore, it is important for those of us who recognize the difference – between subtle and possibly honest mistakes and the glaring dishonest ones – to point it out, and to not mince words about it.  Perhaps the credibility of political factions who traffic in junk science would start to crumble if people could be shown how so many of the "little errors" they made were not mistakes that anyone could honestly make.
