Source: Pandora’s Gun – Cedar Writes
The editor of Analog Magazine pointed out that it’s impossible to disarm a technologically advanced society, even one that wants to be disarmed.
An April 4th headline in the New York Times was eye-catching: “Government Watchdog Finds Racial Bias in School Discipline.” Eye-catching, but highly misleading. The Government Accountability Office report, which was commissioned by congressmen Bobby Scott (D., Va.) and Jerrold Nadler (D., N.Y.), found only what we’ve known for a long time — that African-American students are disciplined at higher rates than white students. Buried in a footnote, the GAO report concedes that disparities by themselves “should not be used to make conclusions about the presence or absence of unlawful discrimination.”
The fact that concession was relegated to a footnote is not the only reason to doubt the GAO’s good faith. Education secretary Betsy DeVos is currently considering whether to withdraw the Obama administration’s controversial “Dear Colleague” letter on school discipline. That letter told schools that their federal funding can be cut off if they discipline African-American students at higher rates than white students, even if the difference is the result of the evenhanded administration of their disciplinary code. The GAO report was released to great fanfare on the same day that DeVos met with interested parties on both sides of the issue. The timing suggests GAO officials may have been all too happy to upstage DeVos.
Here’s what the GAO didn’t disclose: The major reason for the disparity is clear, and it isn’t bias. As painful as it may be to admit, African-American students, on average, misbehave more than their white counterparts. Teachers (including African-American teachers) aren’t making this up, and it isn’t doing African-American students any favors to suggest otherwise.
Just recently, the National Center for Education Statistics released a report showing that African-American students self-report being in physical fights on school property at a rate more than twice that of white students. Similarly, California’s former attorney general (and current senator) Kamala Harris reported in 2014 that African-American fifth-graders are almost five times more likely than whites to be chronically truant. In addition, as the Manhattan Institute’s Heather Mac Donald has reported, African-American male teenagers from ages 14-17 commit homicide at nearly ten times the rate of their white male counterparts. Why should anyone assume that rates of misbehavior in school would magically come out equal?
Too many of our leaders like to preen themselves, claiming that they can’t imagine why teachers would disproportionately discipline African-American students unless the reason is racial discrimination. But denying the facts doesn’t help African-American students. The primary victims of the Obama administration’s effort to federalize school-discipline policy are African-American students attending majority-minority schools who are struggling to learn amid increasing classroom disorder.
What causes these differences in behavior? The short answer is that nobody can explain it perfectly. But common sense suggests, and reams of research show, that children from fatherless households as well as children from economically disadvantaged backgrounds are more likely to get in trouble than other students. That’s at least a large part of the explanation.
The GAO tries to cast doubt on that by arguing that even in schools in prosperous neighborhoods, African-American students are disciplined at higher rates than whites. But the fact that a school is in a relatively prosperous locality doesn’t mean that the African-American students attending it are as well-off as their fellow students.
There is no clear correlation whatsoever between gun ownership rate and gun homicide rate. Not within the USA. Not regionally. Not internationally. Not among peaceful societies. Not among violent ones. Gun ownership doesn’t make us safer. It doesn’t make us less safe. The correlation simply isn’t there. It is blatantly not-there. It is so tremendously not-there that the “not-there-ness” of it alone should be a huge news story.
And anyone with access to the internet and a basic knowledge of Microsoft Excel can check for themselves. Here’s how you do it.
First, go to the Wikipedia page on firearm death rates in the United States. If you don’t like referencing Wikipedia, then instead go to this study from the journal Injury Prevention, based on a survey of 4,000 respondents, which was widely sourced by media on both the left and the right after it came out. Then go to this table published by the FBI, detailing overall homicide rates, as well as gun homicide rates, by state. Copy and paste the data into Excel, and plot one against the other on a scatter diagram. Alternatively, do the whole thing on the back of a napkin. It’s not hard. Here’s what you get:
This looks less like data and more like someone shot a piece of graph paper with #8 birdshot.
If the data were correlated, we should be able to develop a best fit relationship to some mathematical trend function, and calculate an “R^2 Value,” which is a mathematical way of describing how well a trendline predicts a set of data. R^2 Values vary between 0 and 1, with 1 being a perfect fit to the data, and 0 being no fit. The R^2 Value for the linear trendline on this plot is 0.0031. Total garbage. No other function fits it either.
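For readers who would rather script the check than use Excel, the same scatter-plus-trendline test can be sketched in a few lines of Python. The numbers below are made-up placeholders, not the actual state data; paste in the real ownership and homicide figures from the sources above to reproduce the plot’s R² value.

```python
import numpy as np

# Hypothetical placeholder data -- substitute the real state-level
# ownership and gun-homicide figures from the sources cited above.
ownership_rate = np.array([14.7, 45.1, 34.3, 57.9, 20.1, 31.2, 52.8, 16.8, 27.6, 41.0])
gun_homicide_rate = np.array([2.1, 4.8, 9.7, 1.9, 3.4, 6.2, 2.7, 5.5, 1.2, 7.9])

# Least-squares linear trendline: y = m*x + b
m, b = np.polyfit(ownership_rate, gun_homicide_rate, 1)
predicted = m * ownership_rate + b

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = float(np.sum((gun_homicide_rate - predicted) ** 2))
ss_tot = float(np.sum((gun_homicide_rate - np.mean(gun_homicide_rate)) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")  # a value near 0 means the trendline explains almost nothing
```

Because the fit includes an intercept, R² here always lands between 0 and 1, so it can be read directly as “fraction of the variation the trendline explains.”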
I embellished a little with the plot, coloring the data points to correspond with whether a state is “red,” “blue,” or “swing,” according to the Romney-Obama era in which political demarcations were a little more even and a little more sensical. That should give the reader a vague sense of what the gun laws in each state are like. As you can see, there is not only no correlation whatsoever with gun ownership rate, there’s also no correlation whatsoever with state level politics.
But hey, the United States is in a nearly unique situation on the planet, given our high ownership rates and the high number of guns owned per capita, so surely there’s some supporting data linking gun ownership with gun homicide elsewhere, right?
So off we go to Wikipedia again, to their page listing countries by firearm related death rates. If Wikipedia gives you the willies, you’re going to have a harder time compiling this table on your own, because every line in it is linked to a different source. Many of them, however, come from http://www.gunpolicy.org. Their research is supported by UNSCAR, the UN Trust Facility Supporting Cooperation on Arms Regulation, so it is probably pretty reasonable data. They unfortunately do not have gun ownership rates, but do have “guns owned per 100 inhabitants,” which is a similar set we can compare against. And we drop that into Excel, or use the back of our napkin again, and now we are surely going to see how gun ownership drives gun homicide.
Well that’s disappointing.
Remember we are looking for an R^2 value close to 1, or hopefully at least up around 0.7. The value on this one is 0.0107. Garbage.
So let’s briefly recap. Gun murder rate is not correlated with firearm ownership rate in the United States, on a state-by-state basis. Firearm homicide rate is not correlated with guns per capita globally. It’s not correlated with guns per capita among peaceful countries, nor among violent countries, nor among European countries. So what in the heck is going on in the media, where we are constantly bombarded with messaging that “more guns = more murder”?
One: They’re sneaking suicide in with the data, and then obfuscating that inclusion with rhetoric.
This is the biggest trick I see in the media, and very few people seem to pick up on it. Suicide, numerically speaking, is around twice the problem homicide is, both in overall rate and in rate by gun. Two thirds of gun deaths are suicides in the USA. And suicide rates are correlated with gun ownership rates in the USA, because suicide is much easier, and much more final, when done with a gun. If you’re going to kill yourself anyway, and you happen to have a gun in the house, then you choose that method out of convenience. Beyond that, there’s some correlation between overall suicide and gun ownership, owing to the fact that a failed suicide doesn’t show up as a suicide in the numbers, and suicides with guns rarely fail.
Two: They’re cooking the homicide data.
The most comprehensive example of this is probably this study from the American Journal of Public Health. It’s widely cited, was very thorough in its analytical approach, and was built by people I admire and who, I admit, are smarter than me. But to understand how they arrived at their conclusions, and whether those conclusions actually mean what the pundits say they mean, we have to look at what they actually did and what they actually concluded.
First off, they didn’t use actual gun ownership rates. They used fractional suicide-by-gun rates as a proxy for gun ownership. This is apparently a very common technique among gun-policy researchers, but the results of that analysis ended up being very different from the ownership data in the Injury Prevention journal in my first graph of the article. The AJPH study had Hawaii at a 25.8% gun ownership rate, compared to 45% in IP, and had Mississippi at a 76.8% gun ownership rate, compared to 42.8% in IP. Could it be that suicidal people in Hawaii prefer different suicide methods than people in Mississippi, and that this might distort the proxy? I don’t know, but it seems to me that the very use of a proxy at all puts the study on a sketchy foundation. If we can’t know the ownership rate directly, then how can we check that the ratio of gun suicides properly maps onto the ownership rate? Further, the fact that the rates are so different in the two studies makes me curious about the sample size and sampling methods of the IP study. We can be absolutely certain that at least one of these studies, if not both, is wrong on the ownership-rate data. We know this purely because the data sets differ. They can’t both be right. They might both be wrong.
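To make the proxy concrete: the technique amounts to dividing firearm suicides by total suicides and treating that fraction as a stand-in for household gun ownership. A minimal sketch with hypothetical numbers shows the worry: two states with identical true ownership but different method preferences get very different proxy values.

```python
# Sketch of the suicide-method proxy described above. All numbers are
# hypothetical, for illustration only -- not actual state statistics.
def ownership_proxy(gun_suicides: int, total_suicides: int) -> float:
    """Fraction of suicides committed with a firearm, used as a gun-ownership proxy."""
    if total_suicides == 0:
        raise ValueError("no suicides recorded; proxy undefined")
    return gun_suicides / total_suicides

# Two hypothetical states with the same true ownership rate, but whose
# residents prefer different suicide methods:
state_a = ownership_proxy(gun_suicides=120, total_suicides=400)
state_b = ownership_proxy(gun_suicides=280, total_suicides=400)
print(state_a, state_b)  # 0.3 vs 0.7 -- the proxy alone drives the gap
```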
In the second article, we unpack “gun death” statistics and look carefully at suicide.
In the third article, we debunk the “gun homicide epidemic” myth.
In the fourth article, we expand upon why there is no link between gun ownership and gun homicide rate, and why gun buybacks and other gun ownership reduction strategies cannot work.
In the fifth article, we discuss why everyone should basically just ignore school shootings.
When I was in college, I happened across an article listing taboo topics in psychological research. These were “third rail” topics, that would put anyone investigating them in deep yogurt. One of those topics was “Race and IQ”.
It’s still a “third rail”.
In April of 2017, I published a podcast with Charles Murray, coauthor of the controversial (and endlessly misrepresented) book The Bell Curve. These are the most provocative claims in the book:
- Human “general intelligence” is a scientifically valid concept.
- IQ tests do a pretty good job of measuring it.
- A person’s IQ is highly predictive of his/her success in life.
- Mean IQ differs across populations (blacks < whites < Asians).
- It isn’t known to what degree differences in IQ are genetically determined, but it seems safe to say that genes play a role (and also safe to say that environment does too).
At the time Murray wrote The Bell Curve, these claims were not scientifically controversial—though taken together, they proved devastating to his reputation among nonscientists. That remains the case today. When I spoke with Murray last year, he had just been de-platformed at Middlebury College, a quarter century after his book was first published, and his host had been physically assaulted while leaving the hall. So I decided to invite him on my podcast to discuss the episode, along with the mischaracterizations of his research that gave rise to it.
Needless to say, I knew that having a friendly conversation with Murray might draw some fire my way. But that was, in part, the point. Given the viciousness with which he continues to be scapegoated—and, indeed, my own careful avoidance of him up to that moment—I felt a moral imperative to provide him some cover.
In the aftermath of our conversation, many people have sought to paint me as a racist—but few have tried quite so hard as Ezra Klein, Editor-at-Large of Vox. In response to my podcast, Klein published a disingenuous hit piece that pretended to represent the scientific consensus on human intelligence while vilifying me as, at best, Murray’s dupe. More likely, readers unfamiliar with my work came away believing that I’m a racist pseudoscientist in my own right.
Islamophobia is starting to sound like shark-phobia.
The GAO report ignores the critical question regarding disciplinary disparities: do black students in fact misbehave more than white students? The report simply assumes, without argument, that black students and white students act identically in class and proceeds to document their different rates of discipline. This assumption of equivalent school behavior is patently unjustified. According to federal data, black male teenagers between the ages of 14 and 17 commit homicide at nearly 10 times the rate of white male teenagers of the same age (the category “white” in this homicide data includes most Hispanics; if Hispanics were removed from the white category, the homicide disparity between blacks and whites would be much higher). That higher black homicide rate indicates a failure of socialization; teen murderers of any race lack impulse control and anger-management skills. Lesser types of juvenile crime also show large racial disparities. It is fanciful to think that the lack of socialization that produces such elevated rates of criminal violence would not also affect classroom behavior. While the number of black teens committing murder is relatively small compared with their numbers at large, a very high percentage of black children—71 percent—come from the stressed-out, single-parent homes that result in elevated rates of crime.
Understanding the Men’s Rights Movement hints that feminism’s war on men is a class-based issue tied to the relative safety of a middle-class man’s life.
A number of memes posted in social media ask why Christians are willing to support a man of Trump’s character.
Prager correctly wrote, “If a president is also a moral model, that is a wonderful bonus. But that is not part of a president’s job description.” Yet an immoral president can negatively affect the morals of a nation, not to mention negatively impact his own presidency.
So, while I concur with many of the points made by my rightly esteemed colleague, I do so with caveats.
Paul Cassell co-authored an article showing how the reduction in “stop and frisk” activity in Chicago corresponded with a significant increase in homicides. Needless to say, this article provoked comment. It seems some people are uncomfortable with the notion that it might have actually worked, even if minorities were being targeted.
This addresses comments by John Pfaff and the ACLU.
The Volokh Conspiracy by Paul Cassell
On Monday, I discussed the article Professor Fowles and I wrote about what caused the 2016 Chicago homicide spike. Our paper argued that the causal mechanism was likely an ACLU consent decree with the Chicago Police Department, which led to a sharp decline in stop and frisks—and, we believe, a consequent sharp increase in homicides (and other shooting crimes). Since our paper was announced in The Chicago Tribune, distinguished law professor John Pfaff has tweeted a series of comments about our article, and the ACLU has commented as well. I wanted to briefly respond.
Turning first to Professor Pfaff’s tweets, it is useful to start with several points of agreement. Professor Pfaff notes that the causal mechanism we propose—an ACLU agreement leads to fewer stops, fewer stops leads to more crime—is “wholly plausible.” So far, so good.
But then Pfaff moves on to criticize us because our model “has only a handful of variables, almost all of them official criminal justice statistics, no social-economic statistics, and all at the city level (despite the intense concentration of violence in Chicago).” Let’s address these concerns specifically.
First, as to the explanatory variables in our equations: In our most extensive model, we employ twenty variables—specifically stop and frisks (of course); temperature (since crime tends to spike in warm weather months); 911 calls (as a measure of police-citizen cooperation); homicides in Illinois excluding Chicago (as a measure of trends in Illinois); arrests for property crimes, violent crimes, homicides, gun crimes, shooting crimes, and drug crimes; homicides in St. Louis, Columbus, Louisville, Indianapolis, Grand Rapids, Gary, Cincinnati, Cleveland, and Detroit; and a time trend variable. All of these variables were based on monthly data, since we were attempting to explain homicide data reported on a monthly basis. Interestingly, Professor Pfaff does not suggest any other readily available monthly data that we could have included. Nor is it clear what sort of “socio-economic” statistics would have been relevant to explaining the homicide spike, which developed over a short period of time. It is true that our variables are not collected at the neighborhood level, but the city-wide level. But since our goal was to explain the Chicago homicide spike, there is nothing intrinsically wrong with looking at Chicago data.
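As a rough illustration of this kind of model (not the paper’s actual specification, data, or results), here is a minimal monthly OLS regression in Python, using synthetic stand-ins for a few of the variables listed above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 48  # synthetic monthly series, for illustration only

# Hypothetical stand-ins for a few of the explanatory variables listed above
stop_frisks = rng.normal(10_000, 2_000, n_months)
temperature = rng.normal(55, 15, n_months)
calls_911 = rng.normal(90_000, 5_000, n_months)
time_trend = np.arange(n_months, dtype=float)

# Synthetic dependent variable: monthly homicides that fall as stops rise
homicides = 40 - 0.002 * stop_frisks + 0.3 * temperature + rng.normal(0, 5, n_months)

# Design matrix with an intercept column, solved by ordinary least squares
X = np.column_stack([np.ones(n_months), stop_frisks, temperature, calls_911, time_trend])
coefs, *_ = np.linalg.lstsq(X, homicides, rcond=None)
print(dict(zip(["intercept", "stop_frisks", "temperature", "calls_911", "trend"], coefs)))
```

With data generated this way, the estimated `stop_frisks` coefficient comes out negative, mirroring the paper’s claimed direction of effect; the point of the sketch is only the mechanics of a multi-variable monthly model, not its substance.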
The one specific variable that Professor Pfaff argues we failed to include was the “defunding of Cure Violence [a violence prevention program], which happened at the same time” as the spike. But it is curious that Professor Pfaff would take us to task for failing to look at this issue when, at the same time, he argues that the “best analysis” of the homicide spike was done by the University of Chicago Urban Lab. That (ultimately inconclusive) report specifically stated that “earlier in 2015, state funding for Cure Violence, a violence prevention organization operating in Chicago, was suspended, although the timing of that funding reduction does not seem to fit well as a candidate explanation for the increase in gun violence since the latter occurred at the end of 2015.”
Professor Pfaff also mentions that our regression equations simply include (in one model) homicides rates in other cities, without developing difference-in-difference variables or synthetic controls. But there are advantages to parsimonious construction. We doubt whether such controls would have made any difference to our conclusions. Moreover, we relied on Bayesian Model Averaging (BMA) as, at least, a partial response to such concerns. We would be interested to learn what Pfaff thinks of our BMA findings—which compellingly demonstrate our findings’ robustness within the included variables.
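For readers unfamiliar with Bayesian Model Averaging, the core idea can be sketched with a toy example: fit every subset of candidate predictors, weight each model by exp(−BIC/2) (a standard approximation to the model posterior), and average across models. This is synthetic data and a generic BIC-based BMA approximation, not the paper’s actual procedure:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 48
# Hypothetical predictors (illustrative only, not the paper's variables)
X_full = rng.normal(size=(n, 3))
y = 2.0 * X_full[:, 0] + rng.normal(0.0, 1.0, n)  # only predictor 0 matters

def model_bic(X, y):
    """Ordinary least squares fit; return the model's BIC."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    sigma2 = np.mean((y - Xi @ beta) ** 2)
    return len(y) * np.log(sigma2) + Xi.shape[1] * np.log(len(y))

# Score every non-empty subset of predictors
models = []
for r in range(1, X_full.shape[1] + 1):
    for subset in itertools.combinations(range(X_full.shape[1]), r):
        models.append((subset, model_bic(X_full[:, list(subset)], y)))

# BIC-approximated posterior weights (shift by the best BIC for stability)
best = min(bic for _, bic in models)
weights = {s: np.exp(-0.5 * (bic - best)) for s, bic in models}
total = sum(weights.values())

# Posterior inclusion probability: total weight of models containing each predictor
inclusion = [sum(w for s, w in weights.items() if j in s) / total for j in range(3)]
print(inclusion)  # predictor 0 dominates; the others get little weight
```

A finding that is “robust” under BMA is one whose variable keeps a high inclusion probability and a stable coefficient across the weighted model space, rather than depending on one particular specification.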
Professor Pfaff also raises a question about whether we have measured an “ACLU effect” or a “stop and frisk” effect. It is true, of course, that our regression equations explain homicides (and shooting crimes) by using stop and frisk as an explanatory variable. A linkage between stop-and-frisk tactics and homicides is an important finding in and of itself—a finding with which we hope Professor Pfaff might, to some degree, agree. But the logical next question is why did stop and frisks fall in Chicago at the end of 2015? This question is not as well suited to quantitative analysis as other questions, since it appears to be policy-driven. In any event, as Professor Pfaff even-handedly notes, we provide a qualitative defense of our position that the ACLU agreement caused the reduction in stop and frisks. Among other things, this is what the ACLU itself said—at least before the reduction became controversial.
Professor Pfaff also wonders why we do not attempt to quantify the costs of aggressive policing. Our paper explicitly addressed this point, agreeing that proactive policing has costs. But as anyone who has read the stop and frisk literature is well aware, many previous articles have articulated those costs. Our (perhaps already too-lengthy) paper focused on the other side of the cost-benefit equation, hoping to spark a discussion about how to strike a balance among competing concerns.
This issue of balancing competing concerns leads Professor Pfaff to raise a cautionary note about whether our findings are simply, as he puts it, a “Constitutional Effect” rather than an “ACLU Effect.” If things were so starkly simple as saying that all the additional stop and frisks that CPD conducted in 2015 compared to 2016 were unconstitutional, Pfaff might have an argument. But, again, our paper was more limited. The ACLU has justified its efforts to reduce stop and frisks, in part, by making the policy argument that there is “no discernible link between the rate of invasive street stops and searches by police and the level of violence . . . There simply is not any evidence of this so-called [ACLU] effect.” We believe it is fair to respond specifically to ACLU’s claim as part of what must necessarily be a much broader discussion about what are “unreasonable searches and seizures.”
We are encouraged by the fact that Professor Pfaff, based in New York City, is concerned about a common argument advanced about the efficacy of stop and frisk in fighting gun violence—that New York’s experience proves that no such linkage exists. We explained at length in our paper differences between New York and Chicago:
In 2016, New York’s homicide rate was only 3.9 per 100,000 population, while Chicago’s was 27.8—a rate more than 600% higher. But the relevant differences between the two cities may be even higher than this already staggering difference suggests. Looking at homicides committed by firearms, in 2016 New York’s rate was 2.3 compared to Chicago’s rate of 25.1—a rate more than 1000% higher. This is important because, as discussed earlier, gun crimes may be particularly sensitive to stop and frisk policies. In addition, because New York has such a small number of guns and gun crimes (relative to Chicago and many other cities), it can concentrate resources on preventing gun crimes in a way that other cities cannot….
Another problem in equating New York’s circumstances with Chicago’s is that the level of police power is different. Famously, New York has high levels of law enforcement. . . New York had about 153 law enforcement employees for every homicide committed in the city, while Chicago had only about 17 employees for every homicide committed—about an 800% difference. The difference is even greater if one combines both the gun homicide and police force numbers. Per gun homicide, New York has roughly 260 employees, while Chicago has only 19—well over a 1000% difference. To this point it might be objected that a homicide is a homicide, so it makes no sense to break out gun homicides separately. But homicides are not all alike. To the contrary, in general, homicides committed by firearms are more difficult to solve than other kinds of homicides, only adding to the relative difficulties for the Chicago Police Department. Moreover, in 2016, about 23% of New York’s homicides were gang-related, while roughly 67% (or more) of Chicago’s homicides and shootings appear to have been gang-related. Here again, gang-related homicides may be more difficult to solve than are other homicides, particularly in Chicago.
Professor Pfaff notes that our arguments distinguishing Chicago from New York “deserve attention.”
In several concluding tweets, Professor Pfaff wonders whether homicides “spiked” in Chicago or instead rose steadily. Our paper has a section that quantitatively analyzes this point in detail. After seasonally adjusting the data, we are able to perform a standard structural break analysis on our four dependent variables: homicides, fatal shootings, non-fatal shootings, and total shootings. We find structural breaks in all four data series in and around November 2015.
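A structural break of the level-shift kind can be located with a simple residual-sum-of-squares search: try each candidate break month, model the series as two separate means, and keep the split that fits best. This toy sketch uses synthetic data and a deliberately simplified method, not the paper’s actual test:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic monthly counts with a level shift at month index 30, standing in
# for a seasonally adjusted homicide series (illustration only).
series = np.concatenate([rng.normal(40, 4, 30), rng.normal(60, 4, 18)])

def best_break(y, trim=5):
    """Return the break index minimizing total residual sum of squares
    when the series is modeled as two segments, each with its own mean."""
    n = len(y)
    best_idx, best_rss = None, np.inf
    for k in range(trim, n - trim):  # trim keeps both segments non-trivial
        rss = np.sum((y[:k] - y[:k].mean()) ** 2) + np.sum((y[k:] - y[k:].mean()) ** 2)
        if rss < best_rss:
            best_idx, best_rss = k, rss
    return best_idx, best_rss

idx, rss = best_break(series)
print(f"estimated break at month index {idx}")
```

A formal analysis would also test whether the break is statistically significant (e.g., a Chow-type test) rather than merely locating the best-fitting split, but the search logic is the same.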
In responding to each of Professor Pfaff’s questions to us, it may be fair to pose a single question back to him. Based on our review of on-the-street reports from Chicago, regression analysis of the available data, qualitative analysis of possible “omitted variables,” and relevant criminology literature, we believe that the best explanation for the 2016 Chicago homicide spike was a reduction in stop and frisks triggered by the ACLU consent decree. If this isn’t the best explanation, is there a better one?
The ACLU of Illinois has also commented on our paper. Some of the arguments that the ACLU raises are surprising, because the ACLU does not acknowledge that we addressed them at length in our paper.