Studies have proved that having an abortion increases a woman's risk of developing breast cancer.
The claim that undergoing an abortion increases a woman’s chances of developing breast cancer, often cited by anti-abortion activists, arguably has two origins: a scientific one and a political one. Its scientific origins lie in the 1980s and 1990s, with a series of studies investigating a link between induced abortions and an increased risk of breast cancer. Those early studies were ambiguous: no matter what side of the abortion issue one was on, one could (and still can) find a study to support one’s preferred narrative.
From a scientific standpoint, the issue is no longer ambiguous. After an authoritative 1997 study utilizing government-collected data on every Danish woman born between 1935 and 1978 concluded there was no increased risk of breast cancer from abortions, organizations such as the World Health Organization, the U.S. National Cancer Institute, the American Cancer Society, and many others came to reject the existence of any such link. The conclusions of that study, as published in the New England Journal of Medicine (NEJM), were that:
Overall, the risk of breast cancer in women with a history of induced abortion was not different from that in women without such a history, after potential confounding by age, parity, age at delivery of a first child, and calendar period was taken into account.
From a political standpoint, the issue saw a resurgence in attention under the socially conservative administration of George W. Bush, which changed the language on a National Cancer Institute fact sheet regarding breast cancer and abortions against the wishes of that organization’s scientists. This issue was later discussed in an August 2003 U.S. Congressional report from the Committee on Oversight and Government Reform that investigated changes to science policy under the Bush administration:
Until the summer of 2002, the National Cancer Institute posted an analysis on its web site concluding that the current body of scientific evidence does not support the claim that abortions increase a woman’s risk of breast cancer. The analysis explained that after some uncertainty before the mid-1990s, this issue had been resolved by several well-designed studies, the largest of which was published in the New England Journal of Medicine in 1997, finding no link between abortion and breast cancer risk.
In November 2002, however, the Bush Administration removed this analysis and posted new information about abortion and breast cancer on the NCI web site. […]
This new fact sheet erroneously suggested that whether abortion caused breast cancer was an open question with studies of equal weight supporting both sides. The New York Times called the NCI’s new statement “an egregious distortion of the evidence.” According to the director of epidemiology research for the American Cancer Society, “This issue has been resolved scientifically . . . . This is essentially a political debate.”
After members of Congress protested the change, NCI convened a three-day conference of experts on abortion and breast cancer. Participants reviewed all existing population-based, clinical, and animal data available, and concluded that “[i]nduced abortion is not associated with an increase in breast cancer risk,” ranking this conclusion as “well-established.” On March 21, 2003, the NCI web site was updated to reflect this conclusion.
The notion that reproductive choices affect breast cancer risk is not new, nor is it controversial. It is widely accepted, for example, that carrying a pregnancy to term reduces one’s risk of developing breast cancer later in life. This fact is often conflated with the question of abortions increasing a woman’s risk: if carrying a pregnancy to term reduces the risk of breast cancer, then having an abortion may forfeit that protection. That is not the question at hand, however. The question, strictly defined, is ‘will having an abortion make you more likely to get breast cancer compared with women who never give birth?’
The scientific basis for the claimed breast cancer-abortion link has its roots, most notably, in a 1980 paper in the American Journal of Pathology that investigated one proposed mechanism: that undifferentiated cells left over in a woman’s breast after an interrupted pregnancy are more susceptible to cancer. These cells appear during pregnancy as part of the hormonal changes that prepare a woman for breastfeeding, and they would become fully differentiated by the end of a full-term pregnancy.
To test this notion, researchers injected laboratory rats with a carcinogen after interrupted pregnancies, then compared the results to rats that were never pregnant and to rats that carried pregnancies to full term, to see if interrupted pregnancies were related to an increased susceptibility to developing cancerous cells.
The researchers demonstrated that the likelihood of developing cancerous cells was higher in those rats with interrupted pregnancies. The researchers suggested these results were consistent with the idea that the interruption of a pregnancy leaves undifferentiated cells in a woman’s breast, and that these cells are more susceptible to cancer:
We show here that, in order to be protective, the development of the gland must be complete. Pregnant or lactating rats treated with chemical carcinogens respond with a significant reduction in mammary tumor incidence, while pregnancy interruption gives no protection at all. This is due to the fact that in the mammary gland of animals in which pregnancy has been interrupted, the glands contain some areas with completely differentiated structures and others in which undifferentiated structures prevailed.
Since then, a goal of many researchers has been to design studies that test whether this mechanism is valid and whether it is relevant in humans as well. For obvious reasons, injecting humans with a carcinogen is out of the question, so scientists have to settle for observational studies. The different ways in which one can design such studies have created a cornucopia of results of differing quality that anyone can point to, in isolation, to support their narrative.
In terms of research on humans, two kinds of observational studies are used: case-control studies and cohort studies. In general, case-control studies are smaller, and the way in which their control group is identified has the potential to be more subjective or error-prone. The National Cancer Institute defines both as follows:
[A case-control study is] a study that compares two groups of people: those with the disease or condition under study (cases) and a very similar group of people who do not have the disease or condition (controls). Researchers study the medical and lifestyle histories of the people in each group to learn what factors may be associated with the disease or condition. For example, one group may have been exposed to a particular substance that the other was not. Also called retrospective study.
[A cohort study is] a research study that compares a particular outcome (such as lung cancer) in groups of individuals who are alike in many ways but differ by a certain characteristic (for example, female nurses who smoke compared with those who do not smoke).
The findings from case-control studies — in which women were asked their abortion history after they were diagnosed with breast cancer — have been especially difficult to interpret. For, women who have had an induced abortion are known to under-report such events, but they might be more likely to disclose this information than they would otherwise have been if they had been diagnosed with breast cancer and knew that they were taking part in a research project investigating the causes of their disease. […]
For example, among women in a Swedish case-control study who had, in fact, had a previous induced abortion recorded on a national abortion register, 21% of those with breast cancer and 27% of those without the disease reported incorrectly that they had never had an induced abortion. Any such systematic differences between women with and without breast cancer in the under-reporting of past induced abortions could appreciably distort the results from studies with retrospectively recorded information on abortion […]
An illustration of the quantitative effects such misreporting — sometimes referred to as a “recall” bias or reporting bias — could have on final results can be found in a 1996 study whose target population, by design, included both a conservative Catholic region and more socially liberal areas.
Though this small, case-control study did show a correlation between abortions and breast cancer risk when comparing populations of women who have never given birth, the study’s more significant contribution was demonstrating the effect of recall bias on this kind of study:
The association between induced abortion and breast cancer was stronger in the southeastern regions of the country, where there is a predominantly Roman Catholic population, suggesting reporting bias. Support for reporting bias as an explanation for regional differences was also found in data supplied by study participants and their physicians on the use of oral contraception. The authors conclude that reporting bias is a real problem in case-control studies of induced abortion and breast cancer risk if study findings are based solely upon information from study subjects.
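To illustrate how differential under-reporting can skew results even when no true association exists, consider a simplified numerical sketch. The group sizes and the assumed abortion prevalence below are hypothetical; the under-reporting rates (21% of cases and 27% of controls denying a recorded abortion) are the figures from the Swedish register study quoted above:

```python
# Hypothetical sketch of recall bias in a case-control study.
# Assume NO true association: the same fraction of cases (women with
# breast cancer) and controls have actually had an induced abortion.
# Cases under-report their abortion history less than controls do,
# which inflates the apparent odds ratio.

def odds_ratio(cases_exposed, cases_total, controls_exposed, controls_total):
    """Odds ratio from exposed/unexposed counts in cases vs. controls."""
    a, b = cases_exposed, cases_total - cases_exposed
    c, d = controls_exposed, controls_total - controls_exposed
    return (a / b) / (c / d)

N = 10_000          # women per group (hypothetical)
true_rate = 0.20    # assumed true abortion prevalence in BOTH groups

true_exposed = true_rate * N  # 2,000 women per group

# Differential under-reporting, using the Swedish study's figures:
reported_cases = true_exposed * (1 - 0.21)     # cases disclose 79%
reported_controls = true_exposed * (1 - 0.27)  # controls disclose 73%

print(f"True OR:     {odds_ratio(true_exposed, N, true_exposed, N):.2f}")
print(f"Reported OR: {odds_ratio(reported_cases, N, reported_controls, N):.2f}")
# The reported odds ratio comes out near 1.10: an apparent ~10%
# "increased risk" manufactured entirely by differential reporting.
```

This is, of course, a toy model: real recall bias varies by population and by question wording. But it shows why a spurious association of this size can appear in interview-based studies even when registry data show none.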
The 1997 NEJM study was a cohort study and is generally cited as the authoritative study on the alleged risk of abortion. It avoided reporting bias entirely by not conducting interviews at all, instead relying only on data collected by the Danish government:
In our study, all the information on dates and the number of induced abortions, reproductive history, and cancer diagnosis was obtained from national registries, which are compiled through a system of mandatory reporting for the entire population. Follow-up included complete information on death and emigration and was performed through computerized linkage of registry information by means of individually identifiable registration numbers. These measures, we believe, allowed us to avoid some of the major methodologic problems of previous studies.
That doesn’t mean you won’t find studies or individuals still suggesting a link between abortion and breast cancer. The non-profit Breast Cancer Prevention Institute (BCPI) was founded by Joel Brind, one of the main scientists promoting the abortion-breast cancer connection, and its sole purpose appears to be “educating” people about that connection. The organization presents a list of studies that it says support its argument. This list is heavily referenced on anti-abortion websites and is commonly shared with the claim that “73 Studies Have Examined Abortion and Breast Cancer, 53 Show Higher Risk” or similar.
BCPI’s list is an extremely misleading representation of the research, however. Of the (presently) 60 studies it presents as showing a “higher risk” of breast cancer from induced abortions, 23 were described on the organization’s own list as not statistically significant. Suggesting that a statistically insignificant correlation supports any argument is, in a word, deceptive.
That leaves 37 studies with statistically significant positive correlations: a ragtag collection of studies as old as 1957 and as recent as 2013, all of which were case-control studies or meta-analyses that relied primarily on case-control studies, and many of which were designed to test questions other than the abortion-breast cancer connection itself.
One of the listed studies, discussed above, successfully demonstrated that case-control studies are a poor design for analyzing a purported connection between abortions and breast cancer.
Two of the items included on the list were not, strictly speaking, studies; they were conference presentations that received little to no peer review (i.e., Laing et al. 1994, Bu et al. 1995). Many of the studies on the list (e.g., Segi et al. 1957, Daling et al. 1994, Daling et al. 1996) cautioned that while they may have demonstrated correlations, their data were not sufficient to establish causation or even to represent actual trends.
Four of the studies on the list reached conclusions explicitly counter to the claim that they demonstrate an increased risk of breast cancer in populations of women who have had abortions:
Rosenberg et al. 1998: “These data suggest that the risk of breast cancer is not materially affected by abortion, regardless of whether it occurs before or after the first term birth.”
Lipworth et al. 1995: “The risk for breast cancer was not increased for women who had a history of abortion, compared to nulliparous women [women who never gave birth] with no history of abortion.”
Talamini et al. 1996: “No trend in risk was evident for induced abortions.”
Tavani et al. 1996: “Our results indicate a lack of association between induced and spontaneous abortions and breast cancer risk.”
Arguably the most notable publication on this list, both in scope and in citations, is a 1996 paper authored by Brind himself (who created the list in the first place): a meta-review of 28 studies, many of which are also featured individually on the list. His own study concluded:
The results support the inclusion of induced abortion among significant independent risk factors for breast cancer, regardless of parity or timing of abortion relative to the first term pregnancy. Although the increase in risk was relatively low, the high incidence of both breast cancer and induced abortion suggest a substantial impact of thousands of excess cases per year currently, and a potentially much greater impact in the next century, as the first cohort of women exposed to legal induced abortion continues to age.
Typically, meta-reviews have stated inclusion standards delineating which studies are of high enough quality to be analyzed. In Brind’s study, however, “no quality criteria were imposed, but a narrative review of all included studies is presented for the reader’s use in assessing the quality of individual studies.” Numerous scientists have taken serious issue with it on this basis. A team of Harvard epidemiologists applied Brind’s methods in their own study and concluded that the causal claim he so confidently made could not be supported by the data he used.
Mads Melbye, author of the 1997 Danish cohort study, took issue with Brind’s study as well, as covered in a history of the topic:
Criticizing Brind directly, Melbye pointed out that he had relied almost entirely on case-control studies and had based his results on “a crude analysis of published odds ratios and relative risks with no attempt to incorporate the original raw data into a more sophisticated statistical analysis.”
Our classification of “false” acknowledges that some scientists and studies suggest a link between abortion and breast cancer. That suggestion is undermined, however, by the fact that the methodologies utilized in the studies supporting such a link are widely regarded as flawed by the majority of the scientific community, and by the fact that large cohort studies, which are better suited to test this question in the first place, suggest that no link exists.
Observational studies of any kind will always come with limitations and wiggle room for politically motivated players to exploit, as was the case in 2002. On the whole, however, we regard the use of these data to assert a causal link between breast cancer and abortions, without any discussion of the significant caveats involved, as misleading enough to be disqualifying.