ABSTRACT

Over the last quarter century, the prevalence of autism in the US has increased from one in 2,500 in 1989 to one in 68 in 2010 (Baio 2012). Explanations for the autism ‘epidemic’ suggest either that there was a real increase in the number of cases, caused by environmental toxicity (Roberts et al. 2007), or that the rise resulted from a broadening of diagnostic criteria, together with increased public awareness and greater availability of services (Croen et al. 2002; Fombonne 1999; Grinker 2007). Neither explanation, however, holds up under even cursory scrutiny. Given that the diagnostic criteria for autism were broadened just before the epidemic began – first in 1987, with the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) (American Psychiatric Association 1987), and then in 1994, with the fourth edition (DSM-IV) (American Psychiatric Association 1994) – an explanation in terms of increased environmental toxicity seems not only less plausible but also untestable, since there is no biomarker for autism that would allow comparison across historical periods (Goldani et al. 2014). At the same time, an explanation in terms of broadened diagnostic criteria is equally unsatisfactory: it raises the obvious question of why the criteria were broadened. Was it due to a better scientific understanding of autism, or to ‘medicalisation’? This question merely resurrects the original debate about the causes of the epidemic and leads us down a rabbit hole of infinite fractalisation (Abbott 2001).