How does Taco Bell come out on top when there are so many restaurants that are a lot better? Taco Bell is known nationwide. The really good Mexican restaurants are likely to be single establishments, or very small chains. If, say, 100,000 people in Southern California consider El Coyote the best Mexican restaurant ever, well…
There are some 3000 counties in the US. If 100 people in each county answer “Taco Bell” in a survey, that’s 300,000 votes.
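For the spreadsheet-inclined, the same back-of-envelope arithmetic fits in a few lines of Python. The vote counts are the hypothetical ones above, not real survey data:

```python
# Hypothetical numbers from the example above: a thin, nationwide
# preference beats a deep, regional one in a raw popular vote.
counties = 3000
taco_bell_votes = counties * 100   # 100 "best Mexican" votes per county
el_coyote_votes = 100_000          # devoted fans, all in Southern California

print(taco_bell_votes, "vs", el_coyote_votes)   # 300000 vs 100000
print(taco_bell_votes > el_coyote_votes)        # True
```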
After Trump was elected, there was an entire movement to abolish the Electoral College for no other reason than that Trump won and Hillary did not: popular vote, yada yada yada. Thank the good Lord we are not a pure democracy.
The Electoral College was designed to protect the country from populist uprisings and democratic mob rule, for the simple reason that, historically, pure democracies tend to disintegrate into mob chaos and destroy themselves.
There are many reasons why the Electoral College is amazing, wonderful, and should never be abolished on a political whim. Think pieces, original-intent exposés…they all make important points, but none so enlightening as this — the same people who vote for president also voted Taco Bell the best Mexican restaurant in the country.
So, if ever you wondered, THIS, THIS IS WHY WE HAVE THE ELECTORAL COLLEGE.
A potential draft of new federal campus sexual assault policies was leaked this week, so expect a new round of false and misleading statistics to be shared by those who claim due process “protects rapists” and “hurts victims.”
Rape and sexual assault are serious offenses, and shouldn’t be watered down to create a narrative that America is somehow the rape capital of the world, nor should we pretend that non-offenses are offenses. That hurts real victims.
I’ve taken down every one of these statistics before — sometimes many, many times — but it’s time to debunk them all in one place. So here we go.
1-in-5 (or 1-in-4 or 1-in-3) Women Will Be Sexually Assaulted During College
Studies purporting to find such astronomical levels of sexual violence on college campuses (numbers thousands of times higher than war-torn Congo or Detroit, America’s most dangerous city) suffer from many of the same flaws. They are often not nationally representative, are produced by women’s organizations determined to portray women as oppressed victims in America, and rely on self-reported answers — a notoriously unreliable form of data.
The Majority Of Campus Rapes Are Committed By A Small Number Of Men
Sometimes known as the “serial predator” study, this one from David Lisak has been around for decades and was debunked just a few years ago. It claims that “90%” of rapes on campus are perpetrated by a few men.
For starters, Lisak didn’t conduct the study himself but used data from studies conducted by his former grad students, who didn’t limit their data to college students. As in the 1-in-5 stat above, this one was also not nationally representative, as the surveys were conducted near a commuter college with participants who didn’t live on campus and may not have even been students.
The surveys were anonymous, yet Lisak has claimed he conducted follow-up interviews with men who admitted to committing multiple rapes (one questions whether such admissions would be so freely given to a stranger in the first place). Lisak did conduct 12 interviews during his dissertation research three decades ago, but he then combined those cherry-picked interviews into a single character — called “Frank” — which he used to tell school administrators how dangerous their campuses were. No such monster as Frank actually exists, nor is he a common problem across the country.
False Accusations Are Rare
The truth is, we don’t know how many accusations are truly false, and even if we did, one can’t walk into an investigation assuming one already knows the answer.
We’re often told that “just” 2% to 10% of rape accusations are false. College administrators are told this when “trained” on how to handle accusations of sexual assault. The implication is clear: Women just don’t lie about rape, so nine times out of ten, you’d be safe in assuming the accused is guilty.
But that statistic is wildly misleading, as it only applies to accusations made to police that are proven false. Proving a negative is often impossible, especially in a “we had sex but it was consensual” situation. On college campuses, there is no punishment for a false accusation and thus no fear, as there is with lying to the police.
Further, “proven false” is just one of several categories used to classify sexual assault reports, and the other categories do not all equate to “true,” so implying that 90% to 98% of accusations are true is downright false and prejudicial. The other categories include “baseless” reports, incidents wrongly reported as sexual assault, cases without enough evidence for an arrest, cases with enough evidence where an arrest is nevertheless not made for reasons outside police control, and cases where there is enough evidence for an arrest. Of the cases that lead to an arrest, a small percentage actually go to trial and result in a “guilty” finding.
Using the same logic as the peddlers of this statistic, one would only be able to say that 3% to 5% of rape accusations are true, since that’s how many return a “guilty” finding.
It’s Bad That 91% Of Colleges And Universities Said They Received No Rape Reports
I include this one because, while one would think it a good thing that reports of sexual assault aren’t rampant on college campuses, the “scholars” at the American Association of University Women think it’s a bad thing. Because they’ve thoroughly bought into the debunked statistics above, no reports must mean that schools are somehow discouraging victims from coming forward or are sweeping reports under the rug. It’s hard to believe either of these is the case when the media, lawmakers, federal institutions, and Hollywood are constantly claiming huge swaths of the female population are sexually assaulted on college campuses and begging people to come forward.
1-in-3 Men Would Rape If They Could Get Away With It
This statistic was debunked as soon as it appeared in 2015. A woman who admitted to me at the time that she was seeking grant money (a good motive for finding alarming statistics in one’s survey) claimed her study found that a whopping one-third of surveyed men had “intentions to force a woman to sexual intercourse.”
Wow, right? Except, as I’ve pointed out with previous misleading statistics, this one suffers from many of the same flaws. It’s not nationally representative, and the answers of just 73 men were used to arrive at the 1-in-3 number blasted out by the media and women’s groups. Of those 73 men, 23 were found to have those intentions, based on the researchers’ own definition of what constituted bad intentions. Just nine guys said they would actually rape a woman. Nine guys do not an epidemic make.
These guys may not have been taking the survey seriously, or they may simply have been answering the old question from Plato’s Republic: how many people would commit a crime if they knew they wouldn’t be caught? One suspects many people would answer affirmatively to such questions about all sorts of laws, but that doesn’t mean they’d actually break them. One can never know whether one will get away with it.
Study: Google bias in search results; 40% lean left or liberal
In order to assess how fairly search engine results portray political candidates and controversial issues, we collected over 1,200 URLs ranking highly in Google.com for politically-charged keywords such as “gun control”, “abortion”, “TPP”, and “Black Lives Matter”. Each URL was then assessed for political slant by politically active individuals from both the left and right. Finally, we used CanIRank’s SEO software to analyze how each URL compared in dozens of different ranking factors to determine whether Google’s algorithm treated websites similarly regardless of their political slant.
Among our key findings were that top search results were almost 40% more likely to contain pages with a “Left” or “Far Left” slant than they were pages from the right. Moreover, 16% of political keywords contained no right-leaning pages at all within the first page of results.
Our analysis of the algorithmic metrics underpinning those rankings suggests that factors within the Google algorithm itself may make it easier for sites with a left-leaning or centrist viewpoint to rank higher in Google search results compared to sites with a politically conservative viewpoint.
In our sample set of over 2,000 search results, we found that searchers are 39% more likely to be presented information with a left-leaning bias than they are information from the right.
But for some keywords, the search results are even more egregious. Does it make sense, for example, that someone researching “Republican platform” should be presented only the official text of the platform and seven left-leaning results highly critical of that platform, with zero results supporting it?
For other controversial keywords like “minimum wage”, “abortion,” “NAFTA”, “Iraq war”, “campaign finance reform”, “global warming”, “marijuana legalization”, and “tpp”, no right-leaning websites are to be found among the top results.
Search results for keywords like “Hillary Clinton seizures” and “Hillary Clinton sick”, on the other hand, were dominated by right-leaning websites and YouTube footage.
The proportion of results with a left-leaning bias increased for top ranking results, which typically receive the majority of clicks. For example, we found that search results denoted as demonstrating a left or far left slant received 40% more exposure in the top 3 ranking spots than search results considered to have a right or far right political slant.
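To make the methodology concrete, here is a rough Python sketch of how one might tabulate slant and top-spot exposure from labeled search results. The labeled rows and click-through weights below are invented placeholders, not the study’s actual data or method:

```python
from collections import Counter

# (keyword, rank, slant) labels for each ranked URL -- placeholder data
results = [
    ("gun control", 1, "left"), ("gun control", 2, "center"),
    ("gun control", 3, "right"), ("abortion", 1, "left"),
    ("abortion", 2, "left"), ("abortion", 3, "center"),
]

# Raw counts: how much more often do left-labeled pages appear than right?
counts = Counter(slant for _, _, slant in results)
print(f"left appears {counts['left'] / counts['right'] - 1:.0%} more often than right")

# Exposure-weighted version: weight each slot by a rough click-through
# rate, since the top ranking spots capture most clicks.
ctr = {1: 0.30, 2: 0.15, 3: 0.10}
exposure = Counter()
for _, rank, slant in results:
    exposure[slant] += ctr.get(rank, 0.05)
print(dict(exposure))
```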
Lots of really good comments, and the author hangs around to respond.
Links to the studies are in the piece.
Led by Musa al-Gharbi, a Columbia University sociologist, “On Social Research in the Age of Trump” analyzes three case studies of academic research on Trump to illustrate the various ways that academics have misrepresented the president and his voter base to the public.
One example of this phenomenon can be seen in the April 2017 Washington Post article “Racism motivated Trump voters more than authoritarianism,” by Thomas Wood, who teaches political science classes at Ohio State University.
While Wood cites survey data to claim that Trump voters were especially motivated by racism, a closer analysis by al-Gharbi reveals that Wood’s arguments about Trump voters can’t be substantiated from the data cited in the article.
“According to Wood’s own data, whites who voted for Trump are perhaps less racist than those who voted for Romney,” al-Gharbi explains, adding that “not only were they less authoritarian than Romney voters, but less racist too!”
An “effect” is defined as “a phenomenon that follows and is caused by some previous phenomenon.” We have to be careful when labeling something an “effect” to make sure we’ve correctly assigned causation.
That being said, we have a phenomenon called the “mismatch effect”. When people are placed in a competitive environment on the basis of something other than their merits or qualifications, they find it much more difficult to keep up and wind up in the bottom rank or drop out entirely.
In the case of programs that aim to increase the number of certain groups admitted to top-rank colleges, we often find that members of these groups have lower average test scores or GPAs than the average among those who graduate from those schools.
We can imagine that of people who enroll in any school, there will be an average score and some variation around that average. It may well be that, say, two thirds of those admitted will do well and graduate. The students who fail to graduate will most likely be in the bottom third of that distribution. Now imagine a mismatched group (we’ll pick on Martians) is admitted on the basis of some sort of affirmative action program. Suppose their average test scores are one standard deviation lower than the average for the rest of the school. In this case, nearly 72% of these students will have test scores that place them below the cut-off for “likely to graduate”. This is not to say that no Martians will graduate from that school, merely that far fewer will, and at a rate that’s not at all similar to the graduation rate for non-Martians.
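That “nearly 72%” figure is easy to verify. Assuming normally distributed scores with equal spread in both groups, and a graduation cut-off at the 33rd percentile of the school-wide distribution (the bottom third that fails to graduate), a quick computation with scipy gives:

```python
from scipy.stats import norm

# Cut-off below which students tend not to graduate: the 33rd percentile
# of the school-wide score distribution.
cutoff_z = norm.ppf(1 / 3)        # about -0.43 standard deviations

# For a group whose mean is 1 SD below the school average (same spread),
# that cut-off sits 1 SD higher relative to the group's own distribution.
frac_below = norm.cdf(cutoff_z + 1.0)
print(f"{frac_below:.1%}")        # ~71.6%, i.e., "nearly 72%"
```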
At the Volokh Conspiracy, Eugene Volokh offers:
That’s the debate about the “mismatch effect,” which I’ve followed over the years (though from a distance); it has mostly focused on whether race-based affirmative action causes problems (such as lower black bar passage rates) as a result of this effect, but it can also be relevant to many students of all races. I was first exposed to it because of the work of my UCLA School of Law colleague Rick Sander, and Robert Steinbuch at Arkansas / Little Rock has been working on it as well; Rob has been kind enough to pass along these thoughts on the subject:
Analysis of a large dataset containing information on graduates from the law school at which I teach, the University of Arkansas at Little Rock, Bowen School of Law, demonstrates that LSAT scores of students enrolled at the school (1) solidly predicted bar passage, and (2) varied significantly in relation to ethnicity.
Although color-blind admissions should produce roughly 25 percent of both Whites and African Americans in each LSAT-score quartile, over two-thirds of graduating African Americans were admitted with LSAT scores in the bottom quartile, as contrasted with only 16 percent for White students. (For more details, see the recent article I coauthored: Steinbuch and Love, Color-Blind-Spot: The Intersection of Freedom of Information Law and Affirmative Action in Law School Admissions, 20 Tex. Rev. L. & Pol. 1 (2016)). Although almost exactly a quarter of White students were admitted in the top quartile of LSAT scores (as expected), remarkably, only one percent of enrolled African Americans fell into the top quartile of LSAT scores. Predictably, this led to dramatic differences in bar passage: 80 percent of Whites passed the bar (the first time), while only 60 percent of African Americans did.
Given that the African-American cohort in our dataset on average had much lower LSAT scores than the bulk of the student body, it’s fair to conclude that this cohort overall was mismatched. This profile dominated because affirmative-action considerations are designed to weigh factors beyond traditional credentials, and it explains why debates on how to deal with poor bar-passage rates often focus on race-based admissions. However, the ensuing discussion often misses that, while on average Whites will not be mismatched because they have such a large population (putting many at or above the mean of the class), the number of Whites who are mismatched could easily equal or exceed that of any other racial group.
And also from the Volokh Conspiracy, Rick Sander offers this:
Williams’s paper presents equations testing dozens of different combinations of models and outcomes. With impressive consistency, his analysis shows very powerful evidence for law school mismatch, especially for first-time takers. His results are all the more compelling because, as Arcidiacono and Lovenheim point out, the weaknesses of the BPS data bias all analyses against a finding of mismatch. Williams concludes his piece, too, with a plea for the release of better data.
Meanwhile, not a single one of the law school mismatch critics has managed to publish their results in a peer-reviewed journal, though at least some of them have tried. As I will discuss in another post, many of these critics still shrilly hold to their earlier views. But it should be clear now to any reasonable observer that mismatch is a serious issue that the legal academy needs to address.
The above references two survey-scale papers, both of which take great pains to eliminate ideological bias. Links are in the cited piece.
From Just Facts Daily, we get:
Illegal Immigrants Are Far More Likely to Commit Serious Crimes Than the U.S. Public
… the Associated Press published a “fact check” claiming that illegal immigrants are more law-abiding than the general public. Various media outlets, such as the New York Times, Yahoo!, and a number of NBC affiliates published this article. The Washington Post ran a similar story, and other media outlets and so-called fact checkers have made similar claims in the past.
The truth, however, is that comprehensive, straightforward facts from primary sources—namely the Obama administration’s Census Bureau and Department of Justice—prove that illegal immigrants are far more likely to commit serious crimes than the U.S. population. Studies that claim otherwise typically suffer from fallacies condemned by academic publications about how to accurately analyze data.
The Most Concrete Facts
Data on illegal immigration and crime is often clouded, precisely because these are unlawful activities where perpetrators seek to hide their actions. Also, governments sometimes fail to record or release information that could be or has been obtained. The Obama administration, in particular, refused to release the names of convicted immigrant sex offenders and hid other details about crimes committed by immigrants.
Nonetheless, a combination of three material facts sheds enough light on this issue to draw some firm conclusions.
First, U.S. Census data from 2011 to 2015 shows that noncitizens are 7% more likely than the U.S. population to be incarcerated in adult correctional facilities. This alone debunks the common media narrative, but it only scratches the surface of serious criminality by illegal immigrants.
Second, Department of Justice data reveals that in the decade ending in 2015, the U.S. deported at least 1.5 million noncitizens who were convicted of committing crimes in the U.S. (Table 41). This amounts to 10 times the number of noncitizens in U.S. adult correctional facilities during 2015.
Third, Department of Justice data shows that convicts released from prison have an average of 3.9 prior convictions, not including convictions that led to their imprisonment (Table 5). This means that people in prison are often repeat offenders—but as shown by the previous fact, masses of convicted criminals have been deported, making it hard for them to reoffend and end up in a U.S. prison.
In other words, even after deporting 10 times more noncitizens convicted of crimes than are in U.S. prisons and jails, they are still 7% more likely to be incarcerated than the general public. This indicates a level of criminality that is multiplicatively higher than the U.S. population.
Furthermore, roughly half of noncitizens are in the U.S. legally, and legal immigrants rarely commit crimes. This is because U.S. immigration laws are designed to keep criminals out. Thus, the vast majority of incarcerated noncitizens are doubtlessly illegal immigrants. If legal immigrants were removed from the equation, the incarceration rate of illegal immigrants would probably be about twice as high as for all noncitizens.
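The “about twice as high” inference above is simple arithmetic, and worth spelling out. Here is a minimal sketch using the article’s round numbers, including its strong assumption that essentially all incarcerated noncitizens are illegal immigrants:

```python
# Inputs are the article's figures; the last one is its key assumption.
noncitizen_rel_rate = 1.07       # noncitizens: 7% more likely to be incarcerated
share_noncitizens_illegal = 0.5  # roughly half of noncitizens are here illegally
share_inmates_illegal = 1.0      # assume nearly all incarcerated noncitizens are illegal

# If all the incarcerations come from half the noncitizen population,
# that half's rate is about double the all-noncitizen rate.
illegal_rel_rate = noncitizen_rel_rate * share_inmates_illegal / share_noncitizens_illegal
print(f"~{illegal_rel_rate:.1f}x the U.S. population's incarceration rate")  # ~2.1x
```

As the next paragraph notes, undercounting of noncitizens in the Census would pull this estimate back down.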
On the other hand, there is uncertainty about the exact number of noncitizens in the U.S., and Census figures are almost surely low. Hence, the incarceration rate of illegal immigrants is likely not fully twice that of the U.S. population. Nevertheless, this is only the tip of the iceberg, because the U.S. continually deports massive numbers of illegal immigrant convicts.
An April 4th headline in the New York Times was eye-catching: “Government Watchdog Finds Racial Bias in School Discipline.” Eye-catching, but highly misleading. The Government Accountability Office report, which was commissioned by congressmen Bobby Scott (D., Va.) and Jerrold Nadler (D., N.Y.), found only what we’ve known for a long time — that African-American students are disciplined at higher rates than white students. Buried in a footnote, the GAO report concedes that disparities by themselves “should not be used to make conclusions about the presence or absence of unlawful discrimination.”
The fact that this concession was relegated to a footnote is not the only reason to doubt the GAO’s good faith. Education secretary Betsy DeVos is currently considering whether to withdraw the Obama administration’s controversial “Dear Colleague” letter on school discipline. That letter told schools that their federal funding can be cut off if they discipline African-American students at higher rates than white students, even if the difference is the result of the evenhanded administration of their disciplinary code. The GAO report was released to great fanfare on the same day that DeVos met with interested parties on both sides of the issue. The timing suggests GAO officials may have been all too happy to upstage DeVos.
Here’s what the GAO didn’t disclose: The major reason for the disparity is clear, and it isn’t bias. As painful as it may be to admit, African-American students, on average, misbehave more than their white counterparts. Teachers (including African-American teachers) aren’t making this up, and it isn’t doing African-American students any favors to suggest otherwise.
Just recently, the National Center for Education Statistics released a report showing that African-American students self-report being in physical fights on school property at a rate more than twice that of white students. Similarly, California’s former attorney general (and current senator) Kamala Harris reported in 2014 that African-American fifth-graders are almost five times more likely than whites to be chronically truant. In addition, as the Manhattan Institute’s Heather Mac Donald has reported, African-American male teenagers from ages 14-17 commit homicide at nearly ten times the rate of their white male counterparts. Why should anyone assume that rates of misbehavior in school would magically come out equal?
Too many of our leaders like to preen themselves, claiming that they can’t imagine why teachers would disproportionately discipline African-American students unless the reason is racial discrimination. But denying the facts doesn’t help African-American students. The primary victims of the Obama administration’s effort to federalize school-discipline policy are African-American students attending majority-minority schools who are struggling to learn amid increasing classroom disorder.
What causes these differences in behavior? The short answer is that nobody can explain it perfectly. But common sense suggests, and reams of research show, that children from fatherless households, as well as children from economically disadvantaged backgrounds, are more likely to get in trouble than other students. That’s at least a large part of the explanation.
The GAO tries to cast doubt on that by arguing that even in schools in prosperous neighborhoods, African-American students are disciplined at higher rates than whites. But the fact that a school is in a relatively prosperous locality doesn’t mean that the African-American students attending it are as well-off as their fellow students.
There is no clear correlation whatsoever between gun ownership rate and gun homicide rate. Not within the USA. Not regionally. Not internationally. Not among peaceful societies. Not among violent ones. Gun ownership doesn’t make us safer. It doesn’t make us less safe. The correlation simply isn’t there. It is blatantly not-there. It is so tremendously not-there that the “not-there-ness” of it alone should be a huge news story.
And anyone with access to the internet and a basic knowledge of Microsoft Excel can check for themselves. Here’s how you do it.
First, go to the Wikipedia page on firearm death rates in the United States. If you don’t like referencing Wikipedia, then instead use this study from the journal Injury Prevention (based on a survey of 4,000 respondents), which was widely cited by media on both the left and right after it came out. Then go to this table published by the FBI, detailing overall homicide rates, as well as gun homicide rates, by state. Copy and paste the data into Excel, and plot one versus the other on a scatter diagram. Alternately, do the whole thing on the back of a napkin. It’s not hard. Here’s what you get:
This looks less like data and more like someone shot a piece of graph paper with #8 birdshot.
If the data were correlated, we should be able to develop a best fit relationship to some mathematical trend function, and calculate an “R^2 Value,” which is a mathematical way of describing how well a trendline predicts a set of data. R^2 Values vary between 0 and 1, with 1 being a perfect fit to the data, and 0 being no fit. The R^2 Value for the linear trendline on this plot is 0.0031. Total garbage. No other function fits it either.
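If you’d rather skip Excel, the same check is a few lines of Python. The arrays below are placeholders for a handful of states; substitute the full 50-state columns from the sources above (the article reports R^2 = 0.0031 on the real data):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder values -- replace with the Wikipedia/Injury Prevention
# ownership column and the FBI gun-homicide column, one entry per state.
ownership = np.array([45.0, 20.2, 55.8, 31.2, 61.7])  # % of adults owning a gun
gun_homicide = np.array([3.5, 4.1, 2.8, 5.0, 3.2])    # gun homicides per 100k

fit = linregress(ownership, gun_homicide)
print(f"R^2 = {fit.rvalue ** 2:.4f}")  # near 0 means the trendline explains nothing
```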
I embellished a little with the plot, coloring the data points to correspond with whether a state is “red,” “blue,” or “swing,” according to the Romney-Obama era in which political demarcations were a little more even and a little more sensical. That should give the reader a vague sense of what the gun laws in each state are like. As you can see, there is not only no correlation whatsoever with gun ownership rate, there’s also no correlation whatsoever with state level politics.
But hey, we are in a relatively unique situation on the planet, given our high ownership rates and high number of guns owned per capita, so surely there’s some supporting data linking gun ownership with gun homicide elsewhere, right?
So off we go to Wikipedia again, to their page listing countries by firearm related death rates. If Wikipedia gives you the willies, you’re going to have a harder time compiling this table on your own, because every line in it is linked to a different source. Many of them, however, come from http://www.gunpolicy.org. Their research is supported by UNSCAR, the UN Trust Facility Supporting Cooperation on Arms Regulation, so it is probably pretty reasonable data. They unfortunately do not have gun ownership rates, but do have “guns owned per 100 inhabitants,” which is a similar measure we can compare against. And we drop that into Excel, or use the back of our napkin again, and now we are surely going to see how gun ownership drives gun homicide.
Well that’s disappointing.
Remember we are looking for an R^2 value close to 1, or hopefully at least up around 0.7. The value on this one is 0.0107. Garbage.
So let’s briefly recap. Gun murder rate is not correlated with firearm ownership rate in the United States, on a state by state basis. Firearm homicide rate is not correlated with guns per capita globally. It’s not correlated with guns per capita among peaceful countries, nor among violent countries, nor among European countries. So what in the heck is going on in the media, where we are constantly bombarded with messaging that “more guns = more murder”?
One: They’re sneaking suicide in with the data, and then obfuscating that inclusion with rhetoric.
This is the biggest trick I see in the media, and very few people seem to pick up on it. Suicide, numerically speaking, is around twice the problem homicide is, both in overall rate and in rate by gun. Two thirds of gun deaths are suicides in the USA. And suicide rates are correlated with gun ownership rates in the USA, because suicide is much easier, and much more final, when done with a gun. If you’re going to kill yourself anyway, and you happen to have a gun in the house, then you choose that method out of convenience. Beyond that, there’s some correlation between overall suicide and gun ownership, owing to the fact that a failed suicide doesn’t show up as a suicide in the numbers, and suicides with guns rarely fail.
Two: They’re cooking the homicide data.
The most comprehensive example of this is probably this study from the American Journal of Public Health. It’s widely cited, was very comprehensive in its analytical approach, and was built by people I admire and who, I admit, are smarter than I am. But to understand how they ended up with their conclusions, and whether those conclusions actually mean what the pundits say they mean, we have to look at what they actually did and what they actually concluded.
First off, they didn’t use actual gun ownership rates. They used fractional suicide-by-gun rates as a proxy for gun ownership. This is apparently a very common technique among gun policy researchers, but the results of that analysis ended up being very different from the ownership data in the Injury Prevention journal in my first graph of the article. The AJPH study had Hawaii at a 25.8% gun ownership rate, compared to 45% in IP, and had Mississippi at a 76.8% gun ownership rate, compared to 42.8% in IP. Could it be that suicidal people in Hawaii prefer different suicide methods than those in Mississippi, and that might impact their proxy? I don’t know, but it would seem to me that the very use of a proxy at all puts the study on a very sketchy foundation. If we can’t know the ownership rate directly, then how can we check that the ratio of gun suicides properly maps onto the ownership rate? Further, the fact that the rates are so different in the two studies makes me curious about the sample size and sampling methods of the IP study. We can be absolutely certain that at least one of these studies, if not both, is wrong on the ownership rate data. We know this purely because the data sets differ. They can’t both be right. They might both be wrong.
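To see how far apart the two ownership estimates really are, you can line them up directly. The four numbers below are exactly the ones quoted above:

```python
# Gun ownership estimates for the same states from two different sources:
ajph_proxy = {"Hawaii": 25.8, "Mississippi": 76.8}  # suicide-by-gun proxy (AJPH)
ip_survey = {"Hawaii": 45.0, "Mississippi": 42.8}   # survey (Injury Prevention)

for state in ajph_proxy:
    gap = ajph_proxy[state] - ip_survey[state]
    print(f"{state}: proxy {ajph_proxy[state]:.1f}% vs survey "
          f"{ip_survey[state]:.1f}% (gap {gap:+.1f} points)")
```

Gaps of 19 and 34 percentage points on the same quantity mean at least one of the estimates is badly wrong.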
In the second article, we unpack “gun death” statistics and look carefully at suicide.
In the third article, we debunk the “gun homicide epidemic” myth.
In the fourth article, we expand upon why there is no link between gun ownership and gun homicide rate, and why gun buybacks and other gun ownership reduction strategies cannot work.
In the fifth article, we discuss why everyone should basically just ignore school shootings.
Paul Cassell co-authored an article showing how the reduction in “stop and frisk” activity in Chicago corresponded with a significant increase in homicides. Needless to say, this article provoked comment. It seems some people are uncomfortable with the notion that it might have actually worked, even if minorities were being targeted.
This addresses comments by John Pfaff and the ACLU.
The Volokh Conspiracy by Paul Cassell
On Monday, I discussed Professor Fowles’s and my article about what caused the 2016 Chicago homicide spike. Our paper argued that the causal mechanism was likely an ACLU consent decree with the Chicago Police Department, which led to a sharp decline in stop and frisks—and, we believe, a consequent sharp increase in homicides (and other shooting crimes). Since our paper was covered in The Chicago Tribune, distinguished law professor John Pfaff has tweeted a series of comments about our article, and the ACLU has commented as well. I wanted to briefly respond.
Turning first to Professor Pfaff’s tweets, it is useful to start with several points of agreement. Professor Pfaff notes that the causal mechanism we propose—an ACLU agreement leads to fewer stops, fewer stops leads to more crime—is “wholly plausible.” So far, so good.
But then Pfaff moves on to criticize us because our model “has only a handful of variables, almost all of them official criminal justice statistics, no social-economic statistics, and all at the city level (despite the intense concentration of violence in Chicago).” Let’s address these concerns specifically.
First, as to the explanatory variables in our equations: In our most extensive model, we employ twenty variables—specifically stop and frisks (of course); temperature (since crime tends to spike in warm weather months); 911 calls (as a measure of police-citizen cooperation); homicides in Illinois excluding Chicago (as a measure of trends in Illinois); arrests for property crimes, violent crimes, homicides, gun crimes, shooting crimes, and drug crimes; homicides in St. Louis, Columbus, Louisville, Indianapolis, Grand Rapids, Gary, Cincinnati, Cleveland, and Detroit; and a time trend variable. All of these variables were based on monthly data, since we were attempting to explain homicide data reported on a monthly basis. Interestingly, Professor Pfaff does not suggest any other readily-available monthly data that we could have included. Nor is it clear what sort of “socio-economic” statistics would have been relevant to explaining the homicide spike, which developed over a short period of time. It is true that our variables are not collected at the neighborhood level, but at the city-wide level. But since our goal was to explain the Chicago homicide spike, there is nothing intrinsically wrong with looking at Chicago data.
The one specific variable that Professor Pfaff argues we failed to include was the “defunding of Cure Violence [a violence prevention program], which happened at the same time” as the spike. But it is curious that Professor Pfaff would take us to task for failing to look at this issue when, at the same time, he argues that the “best analysis” of the homicide spike was done by the University of Chicago Urban Lab. That (ultimately inconclusive) report specifically stated that “earlier in 2015, state funding for Cure Violence, a violence prevention organization operating in Chicago, was suspended, although the timing of that funding reduction does not seem to fit well as a candidate explanation for the increase in gun violence since the latter occurred at the end of 2015.”
Professor Pfaff also mentions that our regression equations simply include (in one model) homicide rates in other cities, without developing difference-in-difference variables or synthetic controls. But there are advantages to parsimonious construction. We doubt whether such controls would have made any difference to our conclusions. Moreover, we relied on Bayesian Model Averaging (BMA) as, at least, a partial response to such concerns. We would be interested to learn what Pfaff thinks of our BMA findings—which compellingly demonstrate our findings’ robustness within the included variables.
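For readers unfamiliar with BMA, here is a toy sketch of the BIC-weighted flavor of model averaging: fit every subset of candidate predictors, weight each model by exp(-BIC/2), and report each variable’s posterior inclusion probability. This illustrates the general technique on simulated monthly data; it is not the authors’ actual specification:

```python
from itertools import combinations
import numpy as np
import statsmodels.api as sm

def bma_inclusion_probs(y, X, names):
    """BIC-approximated BMA: posterior inclusion probability per variable."""
    k = X.shape[1]
    bics, included = [], []
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            fit = sm.OLS(y, sm.add_constant(X[:, subset])).fit()
            bics.append(fit.bic)
            included.append(set(subset))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # subtract min BIC to avoid underflow
    w /= w.sum()
    return {names[j]: float(sum(wi for wi, s in zip(w, included) if j in s))
            for j in range(k)}

rng = np.random.default_rng(0)
n = 72                          # six years of monthly observations
stops = rng.normal(size=n)      # stop-and-frisk counts (standardized)
temp = rng.normal(size=n)       # monthly temperature
noise = rng.normal(size=n)      # an irrelevant control
homicides = 5.0 - 2.0 * stops + 0.5 * temp + rng.normal(size=n)

X = np.column_stack([stops, temp, noise])
print(bma_inclusion_probs(homicides, X, ["stops", "temp", "noise"]))
# A robust predictor like "stops" should show an inclusion probability near 1.
```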
Professor Pfaff also raises a question about whether we have measured an “ACLU effect” or a “stop and frisk” effect. It is true, of course, that our regression equations explain homicides (and shooting crimes) by using stop and frisk as an explanatory variable. A linkage between stop-and-frisk tactics and homicides is an important finding in and of itself—a finding with which we hope Professor Pfaff might, to some degree, agree. But the logical next question is why did stop and frisks fall in Chicago at the end of 2015? This question is not as well suited to quantitative analysis as other questions, since it appears to be policy-driven. In any event, as Professor Pfaff even-handedly notes, we provide a qualitative defense of our position that the ACLU agreement caused the reduction in stop and frisks. Among other things, this is what the ACLU itself said—at least before the reduction became controversial.
Professor Pfaff also wonders why we do not attempt to quantify the costs of aggressive policing. Our paper explicitly addressed this point, agreeing that proactive policing has costs. But as anyone who has read the stop and frisk literature is well aware, many previous articles have articulated those costs. Our (perhaps already too-lengthy) paper focused on the other side of the cost-benefit equation, hoping to spark a discussion about how to strike a balance among competing concerns.
This issue of balancing competing concerns leads Professor Pfaff to raise a cautionary note about whether our findings are simply, as he puts it, a “Constitutional Effect” rather than an “ACLU Effect.” If things were so starkly simple as saying that all the additional stop and frisks that CPD conducted in 2015 compared to 2016 were unconstitutional, Pfaff might have an argument. But, again, our paper was more limited. The ACLU has justified its efforts to reduce stop and frisks, in part, by making the policy argument that there is “no discernible link between the rate of invasive street stops and searches by police and the level of violence . . . There simply is not any evidence of this so-called [ACLU] effect.” We believe it is fair to respond specifically to ACLU’s claim as part of what must necessarily be a much broader discussion about what are “unreasonable searches and seizures.”
We are encouraged by the fact that Professor Pfaff, based in New York City, is concerned about a common argument advanced about the efficacy of stop and frisk in fighting gun violence—that New York’s experience proves that no such linkage exists. We explained at length in our paper differences between New York and Chicago:
In 2016, New York’s homicide rate was only 3.9 per 100,000 population, while Chicago’s was 27.8—a rate more than 600% higher. But the relevant differences between the two cities may be even higher than this already staggering difference suggests. Looking at homicides committed by firearms, in 2016 New York’s rate was 2.3 compared to Chicago’s rate of 25.1—a rate more than 1000% higher. This is important because, as discussed earlier, gun crimes may be particularly sensitive to stop and frisk policies. In addition, because New York has such a small number of guns and gun crimes (relative to Chicago and many other cities), it can concentrate resources on preventing gun crimes in a way that other cities cannot….
Another problem in equating New York’s circumstances with Chicago’s is that the level of police power is different. Famously, New York has high levels of law enforcement. . . New York had about 153 law enforcement employees for every homicide committed in the city, while Chicago had only about 17 employees for every homicide committed—about an 800% difference. The difference is even greater if one combines both the gun homicide and police force numbers. Per gun homicide, New York has roughly 260 employees, while Chicago has only 19—well over a 1,000% difference. To this point it might be objected that a homicide is a homicide, so it makes no sense to break out gun homicides separately. But homicides are not all alike. To the contrary, in general, homicides committed by firearms are more difficult to solve than other kinds of homicides, only adding to the relative difficulties for the Chicago Police Department. Moreover, in 2016, about 23% of New York’s homicides were gang-related, while roughly 67% (or more) of Chicago’s homicides and shootings appear to have been gang-related. Here again, gang-related homicides may be more difficult to solve than other homicides, particularly in Chicago.
Professor Pfaff notes that our arguments distinguishing Chicago from New York “deserve attention.”
In several concluding tweets, Professor Pfaff wonders whether homicides truly “spiked” in Chicago, or whether they instead rose steadily. Here we have a section of our paper that quantitatively analyzes this point in detail. After seasonally adjusting the data, we are able to perform a standard structural break analysis on our four dependent variables: homicides, fatal shootings, non-fatal shootings, and total shootings. We find structural breaks in all four data series in and around November 2015.
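A standard way to formalize “spike vs. steady rise” is a structural break (Chow) test: fit a linear trend to the whole series and to the two segments around a candidate break date, then compare residual sums of squares. Here is a minimal sketch on simulated monthly data, not the paper’s data or code:

```python
import numpy as np

def chow_f(y, t, b, k=2):
    """Chow F-statistic for a break at index b in the model y = a + c*t."""
    def rss(yy, tt):
        X = np.column_stack([np.ones_like(tt), tt])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        r = yy - X @ beta
        return r @ r
    pooled = rss(y, t)
    split = rss(y[:b], t[:b]) + rss(y[b:], t[b:])
    n = len(y)
    return ((pooled - split) / k) / (split / (n - 2 * k))

rng = np.random.default_rng(1)
t = np.arange(48, dtype=float)            # four years of monthly data
y = 40 + rng.normal(scale=3.0, size=48)   # flat monthly homicide counts...
y[35:] += 20                              # ...with a level shift at month 35

print(f"F at the true break: {chow_f(y, t, 35):.1f}")  # large F => break, not a steady trend
```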
In responding to each of Professor Pfaff’s questions, it may be fair to pose a single question back to him. Based on our review of on-the-street reports from Chicago, regression analysis of the available data, qualitative analysis of possible “omitted variables,” and the relevant criminology literature, we believe that the best explanation for the 2016 Chicago homicide spike was a reduction in stop and frisks triggered by the ACLU consent decree. If this isn’t the best explanation, is there a better one?
The ACLU of Illinois has also commented on our paper. Some of the arguments that the ACLU raises are surprising, because the ACLU does not acknowledge that we addressed them at length in our paper.