During a wide-ranging national debate on race and officer-involved shootings following the deaths of Philando Castile and Alton Sterling, the New York Times‘ “Upshot” published an article reporting “surprising new evidence” indicating that police demonstrated bias in nearly every respect except the use of lethal force.
The article pertained to research published by Harvard University professor of economics Roland G. Fryer, Jr. and was described by the Times as a “study.” However, the Times linked to a “working paper” issued by the National Bureau of Economic Research (NBER), which is not a peer-reviewed journal but rather an economics research organization:
The National Bureau of Economic Research (NBER) is a private, nonprofit, nonpartisan research organization dedicated to promoting a greater understanding of how the economy works. The NBER is committed to undertaking and disseminating unbiased economic research among public policymakers, business professionals, and the academic community. The NBER’s greatest asset is its reputation for scholarly integrity. We expect our affiliated researchers to conduct their affairs in ways that will not compromise their reputations, nor reflect adversely on the integrity of the NBER.
That paper [PDF] began with disclosures that it was funded anonymously and (more important) had not yet been subject to peer review due to its status as a “working paper”:
Financial support from EdLabs Advisory Group and an anonymous donor is gratefully acknowledged. Correspondence can be addressed to the author by email at [redacted]. The usual caveat applies. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.
NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications.
Off the bat, it appeared the Times simply inaccurately described a research-in-progress paper penned by an individual as a “study from Harvard,” lending it a gravitas that was not yet warranted. The newspaper again referred to Fryer’s research as “a study” in a followup titled “Roland Fryer Answers Reader Questions About His Police Force Study.” Fryer didn’t correct the paper’s fluid terminology, but in one portion he responded to a reader query asking if people “have to trust police reports to believe your study”:
As we state in the paper, there are certainly high-profile cases in which the facts stated by officers differed substantially from the videos. So let’s take a minute and think about how this sort of misreporting might bias our findings.
Although the difference might have seemed semantic in nature, the imprecise wording of the Times‘ original piece led a great many readers and news outlets to believe the research was peer-reviewed and published, rather than a work in progress not yet published in any scientific journal. It isn’t clear whether, absent the rigor applied to actual studies, Fryer’s findings will ultimately pass peer review or achieve published status. There is no indication Fryer’s work was necessarily sloppy, but neither has the typical oversight of a genuine study been applied to his working paper. In short, aside from the pedigree of an author who is also a Harvard economics professor, Fryer’s paper as it stands now isn’t much different in stature from any other article published online.
The status of the “study” was evident in the sixth paragraph of the Times‘ 11 July 2016 article, which summed up the paper and noted it was “posted” to the Internet (not published in a journal):
A new study confirms that black men and women are treated differently in the hands of law enforcement. They are more likely to be touched, handcuffed, pushed to the ground or pepper-sprayed by a police officer, even after accounting for how, where and when they encounter the police.
But when it comes to the most lethal form of force — police shootings — the study finds no racial bias.
“It is the most surprising result of my career,” said Roland G. Fryer Jr., the author of the study and a professor of economics at Harvard. The study examined more than 1,000 shootings in 10 major police departments, in Texas, Florida and California.
The result contradicts the image of police shootings that many Americans hold after the killings (some captured on video) of Michael Brown in Ferguson, Mo.; Tamir Rice in Cleveland; Walter Scott in South Carolina; Alton Sterling in Baton Rouge, La.; and Philando Castile in Minnesota.
The study did not say whether the most egregious examples — those at the heart of the nation’s debate on police shootings — are free of racial bias. Instead, it examined a larger pool of shootings, including nonfatal ones.
The counterintuitive results provoked debate after the study was posted on Monday, mostly about the volume of police encounters and the scope of the data.
Subsequent paragraphs included conflicting language about the status of the paper, as well as its methodology:
The study, a National Bureau of Economic Research working paper, relied on reports filled out by police officers and on police departments willing to share those reports. Recent videos of police shootings have led to questions about the reliability of such accounts. But the results were largely the same whether or not Mr. Fryer used information from narratives by officers.
Methodology was a matter of dispute when it came to the paper and its findings: investigative journalist and law enforcement expert Radley Balko published a critical editorial in the Washington Post about the working paper, noting that core data may have been weighted to favor officer perspectives. Balko highlighted cues he spotted in the paper that suggested its findings may have been rooted in shaky reports:
There’s been much talk this week about a new study from Harvard economics professor Roland G. Fryer Jr. on racial bias in police shootings. Much of the coverage has focused on the study’s surprising-to-some conclusion that racial bias doesn’t factor into police use of lethal force, at least in the city of Houston and at least once the officer has stopped a civilian … But the most pertinent flaw in the study (which Fryer has tried to explain, I think unsatisfactorily) is the same flaw in any study that relies on police reports: It relies on police reports.
We want to reform policing. But we want those reforms to be informed, based on good data. The problem is that nearly all the data we have on incidents involving police officers using lethal force comes from reports written by police officers, and nearly all of those reports were written by the officers who were actually involved in those incidents.
For the purpose of the discussion, let’s break shootings and killings by police into three categories: incidents that were illegal and unnecessary, incidents that were legal and necessary, and incidents that were legal but unnecessary. If you’re asking whether current laws and policies allow for too many police shootings, looking at how many shootings are justified under current law and policy is just question begging. It’s that last category — legal but unnecessary — that we want to explore. Unfortunately, it’s also a category that is plagued by subjectivity and the simple fact noted above: Most of the data we have comes from police reports themselves.
If we were to compile statistics on, say, medical mistakes in an effort to make policies that would improve the state of medicine, we wouldn’t get all of our data from written statements by the accused doctors or hospitals. If we wanted to compile data on conflicts of interest in politics, we wouldn’t rely on politicians to self-report and adjudicate when their vote may have been influenced by a campaign donation. But this is essentially what we do with shootings by police officers … The argument here is not that there’s something uniquely untrustworthy about cops. The argument is that almost every police officer who has just shot and killed someone will defend his or her decision to kill. It’s human nature. It could be because the killing was entirely justifiable. It could be because the officer wants to believe it was justifiable. It could be because the officer knows it wasn’t justified, but fears the consequences.
Personally, I suspect that a high percentage — well more than half — of shootings by police are both legal and justified. I also suspect that nearly all cops who have just shot and killed someone truly believe that their actions were justified. That is, I suspect that the percentage of cases in which cops knowingly covered up a bad shoot is pretty small. But I also think it’s safe to say that the percentage of shootings by police that most of the public would find troubling, unnecessary or unjustifiable is far below the 99 percent or higher of cops that are cleared in these cases.
To underscore his point, Balko rattled off a number of officer-involved shootings that technically would have been counted as justifiable uses of force but fell apart under scrutiny. The last example he provided illustrated his distrust of the data used by Fryer:
I think of the Kathryn Johnston case, in which the police invented an informant and lied on a search warrant affidavit before breaking down the 92-year-old woman’s door. She was innocent. When she met them with the broken old revolver she used to scare off intruders, they shot and killed her. In the police report, she was an armed suspect who threatened them with a gun.
Again, none of this is to say this data is completely useless. We just need to be really cautious about how we use it, and realize that the numbers alone don’t always tell the story.
Vox published a critical counter-piece that didn’t take an approach as granular as Balko’s, noting that Fryer’s expertise was not in criminology and that his cited motivations were rooted in a mistaken belief that no such research had yet been done:
As a general rule, when somebody claims that a new academic study finally looks into the data behind a controversial news issue, you should be skeptical.
So when the Times article summarily dismisses existing data as “poor,” and doesn’t explain what that data actually is, that should be a red flag — a clue that the article’s author isn’t going to provide you with an explanation of why this new data is so much better than the old data, and you’re going to have to do that yourself.
When Fryer (an economist by training) tells the Times that he got interested in police shootings because of “his anger after the deaths of Michael Brown and Freddie Gray,” and (in Fryer’s words) “decided I was going to collect a bunch of data and try to understand what really is going on,” that should be another humongous red flag.
It implies that Fryer assumed he was doing something pioneering, rather than asking first what work was already being done and what he could add to the existing conversation. This is something that often happens when people in “quantitative” social sciences, like economics, develop an interest in topics covered in other social sciences — in this case, criminology: They assume that no rigorous empirical work is being done.
Vox also objected to data collection methods that may have inadvertently distorted Fryer’s findings based on the exclusion of potentially relevant incidents:
The most revealing passage in the Times article is probably the one explaining what Fryer and his team didn’t include in their study:
It focused on what happens when police encounters occur, not how often they happen. Racial differences in how often police-civilian interactions occur reflect greater structural problems in society.
In other words, Fryer and company found that there weren’t big racial disparities in how often black and white suspects who’d already been stopped by police were killed. But they deliberately avoided the question of whether black citizens are more likely to be stopped to begin with (they are) and whether they’re more likely to be stopped without cause (yup).
As both outlets noted, Fryer’s findings weren’t necessarily misleading, incorrect, or wrong, but there were numerous obvious problems with the bombastic manner in which the New York Times framed his paper (for starters). Fryer’s paper was neither published nor peer-reviewed, and it was certainly not a “Harvard study.” (A similar controversy erupted over a “Harvard study” on gun rights that was found to be a paper penned by supporters of that issue.)
Critics noted that Fryer’s sample size was exceedingly small (possibly skewing the results) and that his data relied on the narratives of police officers who were party to officer-involved shootings. Moreover, Fryer’s background in economics was certainly useful for crunching data, but it lacked the scope and working knowledge of criminologists and researchers in related fields. The paper is still a work in progress and hasn’t been fully vetted, but even in its “working” state it has been the target of multiple assessments indicating that its findings are far from complete.
- Since this article was published, the study in question was published in the peer-reviewed 'Journal of Political Economy,' October 2018.