News

'First Ever' Study Comparing Vaccinated and Unvaccinated Children Shows Harm from Vaccines?

This study, with its suspect statistics and devil-may-care attitude toward methodological design, is a case study in how to publish a misleading paper with faulty data

Published May 16, 2017

Updated May 19, 2017

In April 2017, anti-vaccine groups seemed to have finally gotten what amounted to the Holy Grail for their cause: an allegedly large-scale, peer-reviewed study showing a link between vaccines and autism in a large population of children. Vaccine-skeptic groups, which reject the wide body of scientific literature refuting that link, have long sought such a study, but they have been hampered by practical concerns, most notably the ethical implications of withholding vaccines from a large group of children.

Released to heavy promotional fanfare on anti-vaccine websites and social media, a 24 April 2017 study published by the Journal of Translational Science claimed to be that Grail. The study (titled "Pilot Comparative Study on the Health of Vaccinated and Unvaccinated 6- to 12-Year Old U.S. Children") neatly solved the problem of withholding vaccines by surveying parents who had already chosen not to vaccinate their children. Using an online survey of 415 mothers of homeschooled children, the study concluded that vaccines can increase the risk of neurological developmental disorders, particularly in cases of preterm birth.

The anti-vaccine website Age of Autism, which also helped raise money for the study, reported its findings in glowing terms:

As parents have long expected, the rate of autism is significantly higher in the vaccinated group, a finding that could shake vaccine safety claims just as the first president who has ever stated a belief in a link between vaccines and autism has taken office.

The only problem? The paper is an identical version of a paper briefly published in Frontiers in Public Health in 2016 before being disowned by that publisher. The Translational Science version was likewise pulled from the journal's website, with some reports that it, too, had been retracted. As of 18 May 2017, however, the study had reappeared on that journal's website with no public explanation of why it was removed or restored.

As we will describe below, these de facto retractions and this high level of scrutiny stem not from a conspiracy to silence work critical of the medical establishment, but from the myriad ethical, methodological, and quantitative problems inherent in the study and in the research group behind it.

The Researchers, Their Funding, and the “Peer-Reviewed” Journals

Though hailed by some blog posts as a truly “independent” research project, the lead author of the study, Anthony Mawson, is far from an impartial player in this debate: he is a vocal supporter of Andrew Wakefield, a controversial doctor who is arguably the father of the anti-vaccine movement. Wakefield first proposed a connection between vaccination and autism in a 12-child case study built on data that was misrepresented and collected without ethical approval, a work that has since been retracted. At the time of that study, Wakefield was working with a lawyer to build a class-action lawsuit against makers of the measles, mumps, and rubella (MMR) vaccine. Wakefield had also filed a patent for a replacement MMR vaccine that he hoped to develop.

Mawson, who signed a petition called "We Support Andrew Wakefield," alleged in a 2011 lawsuit that he lost an academic post due to his views on vaccine safety. (That lawsuit was dismissed.)

Mawson’s vaccine study was funded by two anti-vaccine groups: Generation Rescue, founded by anti-vaccine activist Jenny McCarthy, and the Children's Medical Safety Research Institute, founded by vaccine skeptic Claire Dwoskin. Websites such as Age of Autism ran ads calling for donations to Generation Rescue that explicitly stated the money would go toward funding the study.

The journals that published the study are just as problematic as the donors, though for different reasons: both have been accused of predatory practices. These types of journals profit from academia's relentless focus on publication by charging large publishing fees in lieu of providing editorial oversight. The Journal of Translational Science charged Mawson et al. $2,000 to publish their study.

Additionally, both journals' commitment to the peer-review process is questionable at best. For the study’s 2016 incarnation, Frontiers in Public Health asked Linda Mullin Elkins, a chiropractor with no published research on autism and no background in vaccine research or epidemiology, to review the study. The same journal had, just four months prior, retracted a paper about "chemtrails," a popular topic in conspiracy circles that alleges the government is hiding harm from the chemical trails left by planes in the sky.

While the Journal of Translational Science has less of a publication history by which to judge it — its first issue was released in July 2015 — it is not widely cited and is not indexed in the National Institutes of Health's MEDLINE database, a repository for abstracts of medically relevant research.

Problems With Their Data Collection

But even if the funding sources and authors' histories show bias, they still could have designed a careful study with high-quality research, right? In theory, yes. But in practice, the study design was riddled with flaws.

On its face, this was a fairly simple survey study: through a partnership with a homeschooling network, the authors distributed an online survey link to parents, asking a series of demographic and medical-history questions about their children. As the study's authors describe:

A number of homeschool mothers volunteered to assist [the National Home Education Research Institute, NHERI] promote the study to their wide circles of homeschool contacts. A number of nationwide organizations also agreed to promote the study in the designated states. The online survey remained open for three months in the summer of 2012. Financial incentives to complete the survey were neither available nor offered.

Right off the bat, the pool of participants in the study differs demographically from the U.S. population as a whole. Parents who homeschool are less likely to view vaccines as safe than parents of children in public schools. Homeschooled children, as a whole, are less likely to make annual trips to the doctor (and, based on the study's own data, those whose parents both homeschool and don't vaccinate have even fewer doctor visits).

Those infrequent trips to the doctor make parents less likely to report medical issues in an online survey — because they may simply not know that the issues are there. Vaccinated children are more likely to be diagnosed with health conditions, but that does not mean that they necessarily have more health issues.

The authors of the study acknowledged that their sample was not intended to be representative of the broader U.S. population:

The object of our pilot study was not to obtain a representative sample of homeschool children but a convenience sample of unvaccinated children of sufficient size to test for significant differences in outcomes between the groups.

But even if the study’s authors weren't aiming to build a representative pool of participants, they should at least have built in a measure to ensure that no further bias was introduced based on those participants' medical views. This would have been crucial in light of the fact that the nature and desired finding of the study were prominently advertised and explained on multiple anti-vaccine websites.

The study's authors also made no effort to track who received the survey or what percentage of those people actually responded (a research metric known as the response rate), let alone how participation differed across demographic or ideological groups. This raises the possibility that people who wanted to see a study on vaccine safety were more likely to respond to the survey:

Initial contacts were made in June 2012. NHERI contacted the leaders of each statewide organization by email to request their support. A second email was then sent, explaining the study purpose and background, which the leaders were asked to forward to their members.

A link was provided to an online questionnaire in which no personally identifying information was requested. With funding limited to twelve months, we sought to obtain as many responses as possible, contacting families only indirectly through homeschool organizations.

In short, the study's design was flawed from the start.

Problems With The Study's Statistical Analyses

The authors then dressed up their flawed, potentially biased data set in the cheap Halloween costume of a statistically responsible study. No amount of statistics, however, will get around the fact that there were not enough people in the study to address the questions they claim to have investigated.

Only 47 children in the 666-person study had what the authors defined as a neurological developmental disorder (also called an NDD), an already broadly defined category that includes multiple conditions. Despite this, viral blog posts reporting on the study repeatedly state that autism risk, specifically, was higher in the vaccinated population. What those headlines fail to convey is that such a claim rests entirely on the nine children in the study who were actually diagnosed with autism.

The grossly insufficient sample size is most apparent in the lack of precision in the final results. The authors use something called an "odds ratio" to present their analysis — a measure that compares the relative odds of two populations with differing medical histories (i.e., vaccinated versus unvaccinated) developing a certain medical condition (i.e., NDDs). When scientists report odds ratios, they also give a measure of how precise those odds are — a range of uncertainty that can be thought of, essentially, as a margin of error.

In the Mawson study, the authors claim (for example) that preterm birth combined with vaccination was associated with a 6.6-fold increase in the odds of a neurological condition or learning disability. This remarkably large odds ratio comes with laughably little precision — a range from 2.8 to 15.5. That wide a range of possibility is a big red flag, and it is representative of many of the associations documented in the study.
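To see how that imprecision arises, here is a minimal sketch in Python of how an odds ratio and its 95 percent confidence interval are typically computed from a simple two-by-two table. The counts below are hypothetical and are not taken from the study; the point is that when one cell holds only a handful of cases, the interval balloons:

# Illustrative sketch only: hypothetical counts, not the study's data.
import math

a, b = 20, 180   # exposed group: children with the outcome, without the outcome
c, d = 5, 300    # unexposed group: children with the outcome, without the outcome

odds_ratio = (a * d) / (b * c)

# Standard error of the log odds ratio (the usual Woolf/Wald approach)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

# 95 percent confidence interval, converted back to the odds-ratio scale
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI ({low:.1f}, {high:.1f})")
# With only five affected children in one cell, the interval runs from
# roughly 2.5 to 18 -- a large point estimate, but a wildly imprecise one.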

Odds ratios like these are also generally pushed through statistical models meant to untangle the competing influences of other factors in the data, isolating the specific effect of one risk factor on an outcome. This is a crucial step for Mawson et al., because a significant number of the children in the NDD pool were also born prematurely, and multiple studies have already robustly linked preterm birth to NDDs.
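For readers unfamiliar with that kind of adjustment, below is a minimal sketch of what it typically looks like. It uses Python's statsmodels library and entirely synthetic data — the variable names and numbers are illustrative, not the paper's, whose model is undocumented. The idea is simply that a logistic regression containing both vaccination status and preterm birth produces an odds ratio for each factor while holding the other constant:

# Illustrative sketch of confounder adjustment -- synthetic data,
# not the Mawson study's (undocumented) model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "vaccinated": rng.integers(0, 2, n),
    "preterm": rng.integers(0, 2, n),
})
# Synthetic outcome: preterm birth raises NDD risk, vaccination has no effect
logit_p = -3 + 1.2 * df["preterm"] + 0.0 * df["vaccinated"]
df["ndd"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Regressing the outcome on both factors at once "adjusts" each for the other
model = smf.logit("ndd ~ vaccinated + preterm", data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95 percent confidence intervals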

The competing influence of preterm birth on NDDs is arguably the largest methodological hurdle for the researchers to get around. The authors needed to make crystal clear that the association between NDDs and vaccination holds without the influence of preterm birth, or else the study is meaningless. They claim to have done so, but provide next to no documentation on the specifics of that process.

Without explaining exactly what they did, they simply write that they used two statistical models to adjust for preterm birth, and that the "second adjustment" showed (a) no relationship between preterm birth and NDDs and (b) a remaining relationship between vaccination and NDDs.

That is quite a statement. The researchers' claim that there is no association between preterm birth and NDDs flies in the face of long-established science linking the two. Any research that contradicts long-understood and scientifically documented relationships should be read with a critical eye toward the methods used. But that is almost impossible here, as the only documentation the authors offer of their methods is to say that they carried out a variety of regressions using a statistical software program called SAS.

What makes this worse is that the authors did not explicitly offer a hypothesis or any statistical criteria prior to beginning the study. That makes their data vulnerable to confirmation bias, as it allows them to slice and dice the data in as many ways as they want and plug those slices into a program to see if their desired result pops out. The study’s small sample size undoubtedly compounds this possibility, as it increases the likelihood of chance associations that may be flashy but are ultimately not reproducible.
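As a rough illustration of that danger — again in Python, with made-up data rather than anything from the paper — the sketch below repeatedly tests twenty outcomes that by construction have no relationship to exposure. With small counts and no pre-registered hypothesis, chance alone frequently produces at least one "significant" association:

# Illustrative sketch only: random data with no real exposure-outcome link.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n, n_outcomes, n_sims = 400, 20, 500
hits = 0

for _ in range(n_sims):
    exposed = rng.integers(0, 2, n).astype(bool)
    significant = False
    for _ in range(n_outcomes):
        outcome = rng.random(n) < 0.05   # rare outcome, unrelated to exposure
        table = [
            [int(np.sum(exposed & outcome)), int(np.sum(exposed & ~outcome))],
            [int(np.sum(~exposed & outcome)), int(np.sum(~exposed & ~outcome))],
        ]
        _, p = fisher_exact(table)
        if p < 0.05:
            significant = True
            break
    hits += significant

print(f"{hits / n_sims:.0%} of simulated null datasets produced at least one "
      f"'significant' association purely by chance")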

Sources

Mawson, Anthony R., et al.   "Pilot Comparative Study on the Health of Vaccinated and Unvaccinated 6- to 12-Year Old U.S. Children."     Journal of Translational Science.   24 April 2017.

Wessel, Lindzi.   “Four Vaccine Myths and Where They Came From."     Science.   27 April 2017.

Handley, J.B.   “Unvaccinated Children and Autism: Study Funding Needed Right Now."     Age of Autism.   28 November 2012.

Blaxill, Mark.   “Pilot Comparative Study on the Health of Vaccinated and Unvaccinated 6-12 Year Old US Children."     Age of Autism.   4 May 2017.

Dachel, Anne.   “The Time is Now for a Vaccinated/Unvaccinated Study."     Age of Autism.   23 February 2017.

Orac.   “Antivaccinationists Promote a Bogus Internet “survey.” Hilarity Ensues as It’s Retracted."     ScienceBlogs: Respectful Insolence.   29 November 2016.

Kennedy, Allison M., and Gust, Deborah A.   “Parental Vaccine Beliefs and Child's School Type."     Journal of School Health.   11 August 2005.

Cordner, Alissa.   “The Health Care Access and Utilization of Homeschooled Children in the United States."     Social Science and Medicine.   July 2012.

Batts, Vicki.   “Vaccine Study: Peer-Reviewed Study Shows Vaccinated Children Have a 700% Higher Chance of Neurodevelopmental Disorder."     Natural News.   7 March 2017.

Cook, Colleen, et al.   “A Meta-Analysis of Response Rates in Web- or Internet-Based Surveys."     Educational and Psychological Measurement.   1 December 2000.

Szumilas, Magdalena.   “Explaining Odds Ratios."     J Can Acad Child Adolesc Psychiatry.   August 2010.

Wood, Nicholas S., et al.   “Neurologic and Developmental Disability after Extremely Preterm Birth."     The New England Journal of Medicine.   10 August 2000.

Marlow, Neil, et al.   “Neurologic and Developmental Disability at Six Years of Age after Extremely Preterm Birth."     The New England Journal of Medicine.   6 January 2005.

Moster, Dag, et al.   “Long-Term Medical and Social Consequences of Preterm Birth."     The New England Journal of Medicine.   17 July 2008.

Sterne, Jonathan A.C., and Smith, George Davey.   “Sifting the Evidence—What’s Wrong With Significance Tests?"     BMJ.   27 January 2001.

Nuzzo, Regina.   “Scientific Method: Statistical Errors."     Nature.   14 February 2014.

Lenth, Russell V.   “Some Practical Guidelines for Effective Sample Size Determination."     The American Statistician.   2001.

Deer, Brian.   “How the Case Against the MMR Vaccine Was Fixed."     BMJ.   6 January 2011.

Deer, Brian.   “Exposed: Andrew Wakefield and the MMR-Autism Fraud."     briandeer.com.   Accessed 11 May 2017.

Deer, Brian.   “Revealed: MMR Research Scandal."     The Sunday Times (UK).   22 February 2004.

Trafecante, Kate.   “The Money Behind the Vaccine Skeptics."     CNN Money.   6 February 2015.

McCook, Alison.   “Vaccine-Autism Study Retracted — Again."     Retraction Watch.   8 May 2017.

Chawla, Dalmeet Singh.   “Author Loses 2nd Paper on Supposed Dangers of Chemtrails."     Retraction Watch.   18 July 2016.

Updates

Updated to reflect the fact that the Translational Science version of the Mawson study had been republished on the journal's website as of 18 May 2017.

An earlier version of this article erroneously suggested that the Journal of Translational Science was not "listed" on PubMed. JTS is not indexed for MEDLINE; however, "selected citations" are added to PubMed through what are described as "special collaborative agreements" with outside organizations.

Alex Kasprak is an investigative journalist and science writer reporting on scientific misinformation, online fraud, and financial crime.