Paul Cassell co-authored an article showing how the reduction in “stop and frisk” activity in Chicago corresponded with a significant increase in homicides. Needless to say, this article provoked comment. It seems some people are uncomfortable with the notion that it might have actually worked, even if minorities were being targeted.
This addresses comments by John Pfaff and the ACLU.
The Volokh Conspiracy by Paul Cassell
On Monday, I discussed Professor Fowles and my article about what caused the 2016 Chicago homicide spike. Our paper argued that the causal mechanism was likely an ACLU consent decree with the Chicago Police Department, which led to a sharp decline in stop and frisks—and, we believe, a consequent sharp increase in homicides (and other shooting crimes). Since our paper was announced in The Chicago Tribune, distinguished law professor John Pfaff has tweeted a series of comments about our article, and the ACLU has commented as well. I wanted to briefly respond.
Turning first to Professor Pfaff’s tweets, it is useful to start with several points of agreement. Professor Pfaff notes that the causal mechanism we propose—an ACLU agreement leads to fewer stops, fewer stops leads to more crime—is “wholly plausible.” So far, so good.
But then Pfaff moves on to criticize us because our model “has only a handful of variables, almost all of them official criminal justice statistics, no social-economic statistics, and all at the city level (despite the intense concentration of violence in Chicago).” Let’s address these concerns specifically.
First, as to the explanatory variables in our equations: In our most extensive model, we employ twenty variables—specifically stop and frisks (of course); temperature (since crime tends to spike in warm weather months); 911 calls (as a measure of police-citizen cooperation); homicides in Illinois excluding Chicago (as a measure of trends in Illinois); arrests for property crimes, violent crimes, homicides, gun crimes, shooting crimes, and drug crimes; homicides in St. Louis, Columbus, Louisville, Indianapolis, Grand Rapids, Gary, Cincinnati, Cleveland, and Detroit; and a time trend variable. All of these variables were based on monthly data, since we were attempting to explain homicide data reported on a monthly basis. Interestingly, Professor Pfaff does not suggest any other readily available monthly data that we could have included. Nor is it clear what sort of “socio-economic” statistics would have been relevant to explaining the homicide spike, which developed over a short period of time. It is true that our variables are collected at the city-wide level rather than the neighborhood level. But since our goal was to explain the Chicago homicide spike, there is nothing intrinsically wrong with looking at Chicago data.
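For readers curious about the general shape of such a model, a monthly time-series regression of this kind can be sketched as follows. Everything here is simulated for illustration only; the data, coefficients, and variable names are not the authors' actual dataset or specification.

```python
# Toy monthly time-series regression of the general shape described above.
# All data and coefficients are simulated; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n = 72  # six years of monthly observations (illustrative)

stops = rng.normal(10_000, 2_000, n)                    # stop and frisks per month
temp = 55 + 25 * np.sin(np.arange(n) * 2 * np.pi / 12)  # seasonal temperature proxy
trend = np.arange(n, dtype=float)                       # linear time trend
# Simulated outcome: fewer stops and warmer months mean more homicides.
homicides = 60 - 0.002 * stops + 0.3 * temp + rng.normal(0, 5, n)

# Ordinary least squares: intercept plus the explanatory variables.
X = np.column_stack([np.ones(n), stops, temp, trend])
beta, *_ = np.linalg.lstsq(X, homicides, rcond=None)
print(beta[1])  # coefficient on stops; a negative sign means fewer stops, more homicides
```

In a real specification one would add the remaining controls (911 calls, arrests by category, comparison-city homicides) as further columns of the design matrix.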
The one specific variable that Professor Pfaff argues we failed to include was the “defunding of Cure Violence [a violence prevention program], which happened at the same time” as the spike. But it is curious that Professor Pfaff would take us to task for failing to look at this issue when, at the same time, he argues that the “best analysis” of the homicide spike was done by the University of Chicago Urban Lab. That (ultimately inconclusive) report specifically stated that “earlier in 2015, state funding for Cure Violence, a violence prevention organization operating in Chicago, was suspended, although the timing of that funding reduction does not seem to fit well as a candidate explanation for the increase in gun violence since the latter occurred at the end of 2015.”
Professor Pfaff also mentions that our regression equations simply include (in one model) homicide rates in other cities, without developing difference-in-difference variables or synthetic controls. But there are advantages to parsimonious construction, and we doubt that such controls would have made any difference to our conclusions. Moreover, we relied on Bayesian Model Averaging (BMA) as, at least, a partial response to such concerns. We would be interested to learn what Pfaff thinks of our BMA findings—which compellingly demonstrate our findings’ robustness within the included variables.
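For readers unfamiliar with the technique, the idea behind BMA is to fit every candidate combination of explanatory variables, weight each model by its approximate posterior probability, and report how often each variable earns its place. A common shortcut uses the BIC to approximate the model weights. The toy example below (simulated data, not the paper's) shows a genuinely predictive variable receiving an inclusion probability near one while an irrelevant control does not.

```python
# BIC-approximated Bayesian Model Averaging over all subsets of regressors.
# Data and variable names are simulated for illustration only.
import itertools
import numpy as np

def bma_inclusion_probs(X, y, names):
    """Return the posterior inclusion probability of each candidate variable."""
    n, k = X.shape
    weights, included = [], []
    for r in range(k + 1):
        for subset in itertools.combinations(range(k), r):
            cols = [np.ones(n)] + [X[:, j] for j in subset]
            Xm = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
            rss = np.sum((y - Xm @ beta) ** 2)
            bic = n * np.log(rss / n) + Xm.shape[1] * np.log(n)
            weights.append(np.exp(-0.5 * bic))  # approximate model weight
            included.append(set(subset))
    weights = np.array(weights) / np.sum(weights)
    return {names[j]: sum(w for w, s in zip(weights, included) if j in s)
            for j in range(k)}

rng = np.random.default_rng(1)
n = 72
stops = rng.normal(0, 1, n)
junk = rng.normal(0, 1, n)  # an irrelevant control
y = -2.0 * stops + rng.normal(0, 1, n)
probs = bma_inclusion_probs(np.column_stack([stops, junk]), y, ["stops", "junk"])
print(probs)  # "stops" should receive inclusion probability near 1
```

A variable that retains a high inclusion probability across the whole model space is robust in the sense the authors describe: its importance does not hinge on any one choice of controls.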
Professor Pfaff also raises a question about whether we have measured an “ACLU effect” or a “stop and frisk” effect. It is true, of course, that our regression equations explain homicides (and shooting crimes) by using stop and frisk as an explanatory variable. A linkage between stop-and-frisk tactics and homicides is an important finding in and of itself—a finding with which we hope Professor Pfaff might, to some degree, agree. But the logical next question is why stop and frisks fell in Chicago at the end of 2015. This question is not as well suited to quantitative analysis as other questions, since the decline appears to be policy-driven. In any event, as Professor Pfaff even-handedly notes, we provide a qualitative defense of our position that the ACLU agreement caused the reduction in stop and frisks. Among other things, this is what the ACLU itself said—at least before the reduction became controversial.
Professor Pfaff also wonders why we do not attempt to quantify the costs of aggressive policing. Our paper explicitly addressed this point, agreeing that proactive policing has costs. But as anyone who has read the stop and frisk literature is well aware, many previous articles have articulated those costs. Our (perhaps already too-lengthy) paper focused on the other side of the cost-benefit equation, hoping to spark a discussion about how to strike a balance among competing concerns.
This issue of balancing competing concerns leads Professor Pfaff to raise a cautionary note about whether our findings are simply, as he puts it, a “Constitutional Effect” rather than an “ACLU Effect.” If things were so starkly simple as saying that all the additional stop and frisks that CPD conducted in 2015 compared to 2016 were unconstitutional, Pfaff might have an argument. But, again, our paper was more limited. The ACLU has justified its efforts to reduce stop and frisks, in part, by making the policy argument that there is “no discernible link between the rate of invasive street stops and searches by police and the level of violence . . . There simply is not any evidence of this so-called [ACLU] effect.” We believe it is fair to respond specifically to ACLU’s claim as part of what must necessarily be a much broader discussion about what are “unreasonable searches and seizures.”
We are encouraged by the fact that Professor Pfaff, based in New York City, is skeptical of a common argument about the efficacy of stop and frisk in fighting gun violence—that New York’s experience proves that no such linkage exists. In our paper, we explained at length the differences between New York and Chicago:
In 2016, New York’s homicide rate was only 3.9 per 100,000 population, while Chicago’s was 27.8—a rate more than 600% higher. But the relevant differences between the two cities may be even greater than this already staggering difference suggests. Looking at homicides committed by firearms, in 2016 New York’s rate was 2.3 compared to Chicago’s rate of 25.1—a rate nearly 1000% higher. This is important because, as discussed earlier, gun crimes may be particularly sensitive to stop and frisk policies. In addition, because New York has such a small number of guns and gun crimes (relative to Chicago and many other cities), it can concentrate resources on preventing gun crimes in a way that other cities cannot….
Another problem in equating New York’s circumstances with Chicago’s is that the level of police power is different. Famously, New York has high levels of law enforcement. . . New York had about 153 law enforcement employees for every homicide committed in the city, while Chicago had only about 17 employees for every homicide committed—about an 800% difference. The difference is even greater if one combines both the gun homicide and police force numbers. Per gun homicide, New York has roughly 260 employees, while Chicago has only 19—well over a 1000% difference. To this point it might be objected that a homicide is a homicide, so it makes no sense to break out gun homicides separately. But homicides are not all alike. To the contrary, in general, homicides committed by firearms are more difficult to solve than other kinds of homicides, only adding to the relative difficulties for the Chicago Police Department. Moreover, in 2016, about 23% of New York’s homicides were gang-related, while roughly 67% (or more) of Chicago’s homicides and shootings appear to have been gang-related. Here again, gang-related homicides may be more difficult to solve than are other homicides, particularly in Chicago.
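The percentage comparisons in the excerpt follow the standard "percent higher" calculation, which can be checked directly:

```python
def pct_higher(a, b):
    """How much higher, in percent, rate a is than rate b."""
    return (a / b - 1) * 100

homicide_gap = round(pct_higher(27.8, 3.9), 1)      # NYC 3.9 vs. Chicago 27.8
gun_homicide_gap = round(pct_higher(25.1, 2.3), 1)  # NYC 2.3 vs. Chicago 25.1
print(homicide_gap, gun_homicide_gap)  # → 612.8 991.3
```

That is, Chicago's overall homicide rate was roughly 613% higher than New York's, and its gun homicide rate roughly 991% higher.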
Professor Pfaff notes that our arguments distinguishing Chicago from New York “deserve attention.”
In several concluding tweets, Professor Pfaff wonders whether homicides truly “spiked” in Chicago or instead rose steadily. Our paper has a section that quantitatively analyzes this point in detail. After seasonally adjusting the data, we perform a standard structural break analysis on our four dependent variables: homicides, fatal shootings, non-fatal shootings, and total shootings. We find structural breaks in all four data series in and around November 2015.
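A minimal version of a single-break search of this kind, run here on simulated data rather than the paper's series, picks the split point that best divides a seasonally adjusted series into two regimes:

```python
# Single structural break in the mean: choose the split that minimizes the
# combined residual sum of squares of the two segments. Data are simulated.
import numpy as np

def best_break(series, min_seg=6):
    """Return the index that best splits the series into two level regimes."""
    n = len(series)
    best, best_rss = None, np.inf
    for t in range(min_seg, n - min_seg):
        rss = (np.sum((series[:t] - series[:t].mean()) ** 2)
               + np.sum((series[t:] - series[t:].mean()) ** 2))
        if rss < best_rss:
            best, best_rss = t, rss
    return best

rng = np.random.default_rng(2)
# Simulated seasonally adjusted monthly counts with a level shift at month 47
# (e.g., November 2015 in a series starting January 2012).
series = np.concatenate([rng.normal(40, 4, 47), rng.normal(60, 4, 25)])
print(best_break(series))  # should recover a break near index 47
```

A steadily rising series would produce no single dominant split of this kind, which is what makes a dated structural break evidence of a spike rather than a trend.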
In responding to each of Professor Pfaff’s questions to us, it may be fair to pose a single question back to him. Based on our review of on-the-street reports from Chicago, regression analysis of the available data, qualitative analysis of possible “omitted variables,” and relevant criminology literature, we believe that the best explanation for the 2016 Chicago homicide spike was a reduction in stop and frisks triggered by the ACLU consent decree. If this isn’t the best explanation, is there a better one?
The ACLU of Illinois has also commented on our paper. Some of the arguments that the ACLU raises are surprising, because the ACLU does not acknowledge that we addressed them at length in our paper.