More ADL Pieces

From HotAir.com

About Those Misleading ADL Statistics On Anti-Semitism (And Right-Wing Violence)

JOHN SEXTON
October 31, 2018

“According to the ADL, the number of anti-Semitic attacks has jumped by nearly 60% in the first year that Donald Trump was in office.” But that’s not any more accurate than her claim about ISIS, as Robby Soave at Reason pointed out yesterday.

The ADL statistic captures anti-Semitic “incidents,” which is a much broader category of behavior than “hate crimes” or “attacks.” Incidents include things like bullying in schools—which is bad, but usually not indicative of criminal conduct…

The ADL report came up with three subcategories of anti-Semitic incidents: vandalism, harassment, and assault. An increase in vandalism accounts for much of the overall increase, but Bernstein doubts that all of the included incidents were actually examples of anti-Semitism. The harassment category also saw an increase, largely due to a series of bomb threats against Jewish institutions in the U.S. made by a disturbed Israeli teen. It’s not at all clear that these threats were motivated by anti-Semitism.

Finally, the assault category saw a 47 percent decrease.

Soave also refers to this article at the Volokh Conspiracy by David Bernstein which suggests the ADL report is intentionally misleading, at least with regard to the bomb threats against Jewish Centers (something I wrote about extensively when it was happening):

There are several problems with relying on this study for Trump-bashing, however. The first is that the study includes 193 incidents of bomb threats to Jewish institutions as anti-Semitic incidents, even though by the time the ADL published the study, it had been conclusively shown that the two perpetrators of the bomb threats were not motivated by anti-Semitism. One can only guess why the ADL chose to inflate its statistics in this way, but none of the explanations speak well of it…

 

….

I could wrap this up here but I’d like to point out that the ADL also publishes an annual report titled “Murder and Extremism in the United States in 20xx.” In 2016, the ADL published this striking claim which got quoted quite a few times by people on the left: “Over the past 10 years (2007-2016), domestic extremists of all kinds have killed at least 372 people in the United States. Of those deaths, approximately 74% were at the hands of right-wing extremists, about 24% of the victims were killed by domestic Islamic extremists, and the remainder were killed by left-wing extremists.”

Last year I asked ADL if they could provide the information to back up that claim because the actual data is not available on their website and wasn’t included in the 2016 report itself. Initially, they responded and agreed they would pull together some information for me. But it never arrived. I sent 2 or 3 follow-up emails over a period of months and they never responded to those at all.

….

I guess it’s fair to say white supremacists are doubly dangerous to their immediate family, but I don’t think that’s what most people have in mind when they skim a report titled “Murder and Extremism in the United States in 20xx.”

As with its handling of anti-Semitic incidents, the ADL appears to be padding the numbers. In the case of the extremism reports, the ADL never hid the fact that it was including these non-ideological murders, but I suspect most people reading a quote secondhand, like the one I started with above, aren’t fully aware of what is included in the bottom line.


Populism is Dangerous: Taco Bell Voted Best Mexican Restaurant in the Country

Source: Populism is Dangerous: Taco Bell Voted Best Mexican Restaurant in the Country

 

How does Taco Bell come out on top when there are so many restaurants that are a lot better? Taco Bell is known nationwide. The really good Mexican restaurants are likely to be single establishments, or very small chains. If, say, 100,000 people in Southern California consider El Coyote the best Mexican restaurant ever, well…

There are some 3000 counties in the US. If 100 people in each county answer “Taco Bell” in a survey, that’s 300,000 votes.
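The arithmetic is worth spelling out. A minimal sketch, using only the hypothetical vote counts from the example above (none of this is survey data):

# Hypothetical plurality arithmetic from the example above.
counties = 3000                    # rough count of U.S. counties
chain_votes_per_county = 100       # people per county naming the national chain
regional_favorite_votes = 100_000  # devoted fans of one local restaurant

national_chain_votes = counties * chain_votes_per_county
print(national_chain_votes, regional_favorite_votes)    # 300000 vs. 100000
print(national_chain_votes > regional_favorite_votes)   # True: the thinly spread chain wins the plurality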

After Trump was elected, there was an entire movement to abolish the Electoral College for no other reason than Trump won and Hillary did not, popular vote, yada yada yada. Thank the good Lord we are not a pure democracy.

The Electoral College was designed to protect the country from populist uprisings and democratic mob rule, simply because, historically, pure democracies have tended to disintegrate into chaos and destroy themselves.

There are many reasons why the Electoral College is amazing, wonderful, and should never be abolished on a political whim. Think pieces, original-intent exposés…they all make important points, but none so enlightening as this: the same people who vote for president also voted Taco Bell the best Mexican restaurant in the country.

So, if ever you wondered, THIS, THIS IS WHY WE HAVE THE ELECTORAL COLLEGE.

Raping Statistics

Source: Lies, Damned Lies, And Campus Sexual Assault Statistics | Ashe Schow

A potential draft of new federal campus sexual assault policies was leaked this week, so expect a new round of false and misleading statistics to be shared by those who claim due process “protects rapists” and “hurts victims.”

Rape and sexual assault are serious offenses, and shouldn’t be watered down to create a narrative that America is somehow the rape capital of the world, nor should we pretend that non-offenses are offenses. That hurts real victims.

I’ve taken down every one of these statistics before — sometimes many, many times — but it’s time to debunk them all in one place. So here we go.

1-in-5 (or 1-in-4 or 1-in-3) Women Will Be Sexually Assaulted During College

Studies purporting to find such an astronomical amount of sexual violence on college campuses (numbers thousands of times higher than war-torn Congo or Detroit, America’s most dangerous city) suffer from many of the same flaws. They are often not nationally representative, are produced by women’s organizations determined to portray women as oppressed victims in America, and rely on self-reporting — a notoriously unreliable form of data.

[snip]

The Majority Of Campus Rapes Are Committed By A Small Number Of Men

Sometimes known as the “serial predator” study, this one from David Lisak has been around for decades and was debunked just a few years ago. It claims that “90%” of rapes on campus are perpetrated by a few men.

For starters, Lisak didn’t conduct the study himself but used data from studies conducted by his former grad students, who didn’t limit their data to college students. As with the 1-in-5 stat above, this one was also not nationally representative, as the surveys were conducted near a commuter college with participants who didn’t live on campus and may not have even been students.

The surveys were anonymous, yet Lisak has claimed he conducted follow-up interviews with men who admitted to committing multiple rapes (one questions whether such admissions would be so freely given to a stranger in the first place). Lisak did conduct 12 interviews during his dissertation research three decades ago, but he then combined those cherry-picked interviews into a single character — called “Frank” — which he used to tell school administrators how dangerous their campuses were. No such monster as Frank actually exists, nor is he a common problem across the country.

False Accusations Are Rare

The truth is, we don’t know how many accusations are truly false, and even if we did, investigators can’t walk into an investigation assuming they already know the answer.

We’re often told that “just” 2% to 10% of rape accusations are false. College administrators are told this when “trained” on how to handle accusations of sexual assault. The implication is clear: Women just don’t lie about rape, so nine times out of ten, you’d be safe in assuming the accused is guilty.

But that statistic is wildly misleading, as it only applies to accusations made to police that are proven false. Proving a negative is often impossible, especially in a “we had sex but it was consensual” situation. On college campuses, there is no punishment for a false accusation and thus no fear, as there is with lying to the police.

Further, “proven false” is only one category in how sexual assault reports are classified. The other categories do not all equate to “true,” so implying that 90% to 98% of accusations are true is downright false and prejudicial. Other categories include “baseless” reports (incidents wrongly reported as sexual assault), cases without enough evidence for an arrest, cases where there is enough evidence but, for reasons outside police control, no arrest is made, and cases where there is enough evidence for an arrest. Of the cases that lead to an arrest, only a small percentage actually go to trial and result in a “guilty” finding.

Using the same logic as the peddlers of this statistic, one would only be able to say that 3% to 5% of rape accusations are true, since that’s how many return a “guilty” finding.

It’s Bad That 91% Of Colleges And Universities Said They Received No Rape Reports

I include this one because while one would think it would be a good thing that reports of sexual assault aren’t rampant on college campuses, the “scholars” at the American Association of University Women think it’s a bad thing. Because they’ve thoroughly bought into the debunked statistics above, no reports must mean that schools are somehow discouraging victims from coming forward or are sweeping reports under the rug. It’s hard to believe either of these is the case when the media, lawmakers, federal institutions, and Hollywood are constantly claiming huge swaths of the female population are sexually assaulted on college campuses and begging people to come forward.

1-in-3 Men Would Rape If They Could Get Away With It

This statistic was quickly debunked as soon as it appeared in 2015. A woman who admitted to me at the time that she was seeking grant money (a good motive for finding alarming statistics in one’s survey) claimed her study found that a whopping one-third of surveyed men had “intentions to force a woman to sexual intercourse.”

Wow, right? Except, as I’ve pointed out with previous misleading statistics, this one suffers from many of the same flaws. It’s not nationally representative, and the answers of just 73 men were used to arrive at the 1-in-3 number blasted out by the media and women’s groups. Of those 73 men, 23 were found to have those intentions, based on the researcher’s own definition of what constituted bad intentions. Just nine guys said they would actually rape a woman. Nine guys do not an epidemic make.

These guys may not have been taking the survey seriously, or they were answering a question straight out of Plato’s Republic: how many people would commit a crime if they knew they wouldn’t be caught? Plenty of people would probably answer affirmatively to such questions about breaking various laws, but that doesn’t mean they’d actually do it. No one can ever know they will get away with it.
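For what it’s worth, the arithmetic behind the headline is easy to check. A minimal sketch using only the 73 / 23 / 9 figures quoted above; the margin-of-error calculation is a standard back-of-the-envelope estimate, not anything from the study itself:

# Quick sanity check on the sample size behind the "1-in-3" headline.
# The 73 / 23 / 9 figures come from the article above.
import math

n = 73            # men whose answers were used
flagged = 23      # met the researcher's definition of bad "intentions"
would_rape = 9    # said they would actually rape

for label, k in [("flagged", flagged), ("would rape", would_rape)]:
    p = k / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)   # rough 95% margin of error
    print(f"{label}: {p:.1%} +/- {moe:.1%}")

# flagged:    31.5% +/- 10.7%   -> the "1-in-3" figure, with a wide band
# would rape: 12.3% +/- 7.5%

A sample that small carries a margin of error of roughly ten percentage points even before asking whether 73 non-representative respondents tell us anything about men in general.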

Who Would Google Vote For?

Study: Google bias in search results; 40% lean left or liberal

Source

[snip]

In order to assess how fairly search engine results portray political candidates and controversial issues, we collected over 1,200 URLs ranking highly in Google.com for politically-charged keywords such as “gun control”, “abortion”, “TPP”, and “Black Lives Matter”. Each URL was then assessed for political slant by politically active individuals from both the left and right. Finally, we used CanIRank’s SEO software to analyze how each URL compared in dozens of different ranking factors to determine whether Google’s algorithm treated websites similarly regardless of their political slant.

Among our key findings were that top search results were almost 40% more likely to contain pages with a “Left” or “Far Left” slant than they were pages from the right. Moreover, 16% of political keywords contained no right-leaning pages at all within the first page of results.

Our analysis of the algorithmic metrics underpinning those rankings suggests that factors within the Google algorithm itself may make it easier for sites with a left-leaning or centrist viewpoint to rank higher in Google search results compared to sites with a politically conservative viewpoint.

[snip]

In our sample set of over 2,000 search results, we found that searchers are 39% more likely to be presented information with a left-leaning bias than they are information from the right.

But for some keywords, the search results are even more egregious. Does it make sense, for example, that someone researching “Republican platform” should be presented only the official text of the platform and seven left-leaning results highly critical of that platform, with zero results supporting it?

For other controversial keywords like “minimum wage”, “abortion,” “NAFTA”, “Iraq war”, “campaign finance reform”, “global warming”, “marijuana legalization”, and “tpp”, no right-leaning websites are to be found among the top results.

Search results for keywords like “Hillary Clinton seizures” and “Hillary Clinton sick”, on the other hand, were dominated by right-leaning websites and YouTube footage.

The proportion of results with a left-leaning bias increased for top ranking results, which typically receive the majority of clicks. For example, we found that search results denoted as demonstrating a left or far left slant received 40% more exposure in the top 3 ranking spots than search results considered to have a right or far right political slant.

[snip]

Lots of really good comments, and the author hangs around to respond.
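The core tally behind a claim like “39% more likely” is simple to sketch. The slant labels below are invented placeholders rather than CanIRank’s data; the point is only to show the shape of the calculation, including the rank-weighted version:

# Sketch of the tally behind "X% more likely" claims.
# The (keyword, rank, slant) labels are invented placeholders, not CanIRank's data.
from collections import Counter

results = [
    ("gun control", 1, "left"),   ("gun control", 2, "center"), ("gun control", 3, "left"),
    ("abortion",    1, "left"),   ("abortion",    2, "right"),  ("abortion",    3, "center"),
    ("TPP",         1, "center"), ("TPP",         2, "left"),   ("TPP",         3, "right"),
]

counts = Counter(slant for _, _, slant in results)
left, right = counts["left"], counts["right"]
print(f"left results are {left / right - 1:.0%} more likely than right results")
# with these toy labels: 100% (the study reported roughly 39%)

# The rank-weighted version is the same idea, with the top spots counting more:
click_weights = {1: 0.33, 2: 0.17, 3: 0.10}   # assumed click-through shares
exposure = Counter()
for _, rank, slant in results:
    exposure[slant] += click_weights.get(rank, 0.05)
print(dict(exposure))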

The Narrative Trumps All

STUDY: Researchers falsely frame Trump supporters as racists

Links to the studies are in the piece.

Led by Musa al-Gharbi, a Columbia University sociologist, “On Social Research in the Age of Trump” analyzes three case studies of academic research on Trump to illustrate the various ways that academics have misrepresented the president and his voter base to the public.

One example of this phenomenon can be seen in the April 2017 Washington Post article “Racism motivated Trump voters more than authoritarianism,” by Thomas Wood, who teaches political science classes at Ohio State University.

While Wood cites survey data to claim that Trump voters were especially motivated by racism, a closer analysis by al-Gharbi reveals that Wood’s arguments about Trump voters can’t be substantiated from the data cited in the article.

“According to Wood’s own data, whites who voted for Trump are perhaps less racist than those who voted for Romney,” al-Gharbi explains, adding that “not only were they less authoritarian than Romney voters, but less racist too!”

 

The Mismatch Effect

An “effect” is defined as “a phenomenon that follows and is caused by some previous phenomenon”. So before labeling something an “effect”, we have to be careful that we have correctly assigned causation.

That being said, we have a phenomenon called the “mismatch effect”. When people are placed in a competitive environment on the basis of something other than their merits or qualifications, they tend to find it much more difficult to keep up, and many wind up at the bottom of the rankings or drop out entirely.

In the case of programs that aim to increase the number of certain groups admitted to top-rank colleges, we often find that members of these groups have lower average test scores or GPAs than the average among those who graduate from those schools.

We can imagine that of the people who enroll in any school, there will be an average score and some variation around that average. It may well be that, say, two-thirds of those admitted will do well and graduate. The students who fail to graduate will most likely be in the bottom third of that distribution. Now imagine a mismatched group (we’ll pick on Martians) is admitted on the basis of some sort of affirmative action program, and suppose their average test scores are one standard deviation lower than the average for the rest of the school. The cut-off for the bottom third sits roughly 0.43 standard deviations below the school mean; for a group whose mean is a full standard deviation lower (with the same spread), that cut-off lies about 0.57 standard deviations above their mean. In this case, nearly 72% of these students will have test scores that place them below the cut-off for “likely to graduate”. This is not to say that no Martians will graduate from that school, merely that far fewer will, and at a rate that’s not at all similar to the graduation rate for non-Martians.
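For anyone who wants to check that figure, here is a minimal sketch, assuming scores in both groups are roughly normal with the same spread:

# Reproducing the "nearly 72%" figure in the thought experiment above.
# Assumes test scores are roughly normal with equal spread in both groups.
from statistics import NormalDist

school = NormalDist(mu=0, sigma=1)       # overall admitted class (standardized scores)
cutoff = school.inv_cdf(1/3)             # bottom third fails to graduate -> about -0.43

martians = NormalDist(mu=-1, sigma=1)    # group admitted with scores 1 SD lower on average
share_below_cutoff = martians.cdf(cutoff)

print(f"cutoff = {cutoff:.2f} SD, share of mismatched group below it = {share_below_cutoff:.1%}")
# cutoff = -0.43 SD, share of mismatched group below it = 71.5%  (i.e., nearly 72%)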

At the Volokh Conspiracy, Eugene Volokh offers:

The Mismatch Effect: A Danger for Students of All Races

[snip]

That’s the debate about the “mismatch effect,” which I’ve followed over the years (though from a distance); it has mostly focused on whether race-based affirmative action causes problems (such as lower black bar passage rates) as a result of this effect, but it can also be relevant to many students of all races. I was first exposed to it because of the work of my UCLA School of Law colleague Rick Sander, and Robert Steinbuch at Arkansas / Little Rock has been working on it as well; Rob has been kind enough to pass along these thoughts on the subject:

[snip]
Analysis of a large dataset containing information on graduates from the law school at which I teach, the University of Arkansas at Little Rock, Bowen School of Law, demonstrates that LSAT scores of students enrolled at the school (1) solidly predicted bar passage, and (2) varied significantly in relation to ethnicity.

Although color-blind admissions should produce roughly 25 percent of both Whites and African Americans in each LSAT-score quartile, over two-thirds of graduating African Americans were admitted with LSAT scores in the bottom quartile, as contrasted with only 16 percent for White students. (For more details, see the recent article I coauthored: Steinbuch and Love, Color-Blind-Spot: The Intersection of Freedom of Information Law and Affirmative Action in Law School Admissions, 20 Tex. Rev. L. & Pol. 1 (2016)). Although almost exactly a quarter of White students were admitted in the top quartile of LSAT scores (as expected), remarkably, only one percent of enrolled African Americans fell into the top quartile of LSAT scores. Predictably, this led to dramatic differences in bar passage: 80 percent of Whites passed the bar (the first time), while only 60 percent of African Americans did.

Given that the African-American cohort in our dataset on average had much lower LSAT scores than the bulk of the student body, it’s fair to conclude that this cohort overall was mismatched. This profile dominated because affirmative-action admissions are designed to consider factors beyond traditional credentials, and it explains why debates on how to deal with poor bar-passage rates often focus on race-based admissions. However, the ensuing discussion often misses that, while Whites on average will not be mismatched (their large numbers put many at or above the mean of the class), the number of Whites who are mismatched could easily equal or exceed that of any other racial group.
[snip]

 

 
And also from the Volokh Conspiracy, Rick Sander offers this:

An emerging scholarly consensus on mismatch and affirmative action (ideologues not welcome)

[snip]
Williams’s paper presents equations testing dozens of different combinations of models and outcomes. With impressive consistency, his analysis shows very powerful evidence for law school mismatch, especially for first-time takers. His results are all the more compelling because, as Arcidiacono and Lovenheim point out, the weaknesses of the BPS data bias all analyses against a finding of mismatch. Williams concludes his piece, too, with a plea for the release of better data.

Meanwhile, not a single one of the law school mismatch critics has managed to publish their results in a peer-reviewed journal, though at least some of them have tried. As I will discuss in another post, many of these critics still shrilly hold to their earlier views. But it should be clear now to any reasonable observer that mismatch is a serious issue that the legal academy needs to address.

The above references two survey-scale papers, both taking great effort to eliminate ideological bias. Links are in the cited piece.

Illegal Immigrants and Crime

From Just Facts Daily, we get:

Illegal Immigrants Are Far More Likely to Commit Serious Crimes Than the U.S. Public

… the Associated Press published a “fact check” claiming that illegal immigrants are more law-abiding than the general public. Various media outlets, such as the New York Times, Yahoo!, and a number of NBC affiliates, published this article. The Washington Post ran a similar story, and other media outlets and so-called fact checkers have made similar claims in the past.

The truth, however, is that comprehensive, straightforward facts from primary sources—namely the Obama administration’s Census Bureau and Department of Justice—prove that illegal immigrants are far more likely to commit serious crimes than the U.S. population. Studies that claim otherwise typically suffer from fallacies condemned by academic publications about how to accurately analyze data.

The Most Concrete Facts

Data on illegal immigration and crime is often clouded, precisely because these are unlawful activities where perpetrators seek to hide their actions. Also, governments sometimes fail to record or release information that could be or has been obtained. The Obama administration, in particular, refused to release the names of convicted immigrant sex offenders and hid other details about crimes committed by immigrants.

Nonetheless, a combination of three material facts sheds enough light on this issue to draw some firm conclusions.

First, U.S. Census data from 2011 to 2015 shows that noncitizens are 7% more likely than the U.S. population to be incarcerated in adult correctional facilities. This alone debunks the common media narrative, but it only scratches the surface of serious criminality by illegal immigrants.

Second, Department of Justice data reveals that in the decade ending in 2015, the U.S. deported at least 1.5 million noncitizens who were convicted of committing crimes in the U.S. (Table 41). This amounts to 10 times the number of noncitizens in U.S. adult correctional facilities during 2015.

Third, Department of Justice data shows that convicts released from prison have an average of 3.9 prior convictions, not including convictions that led to their imprisonment (Table 5). This means that people in prison are often repeat offenders—but as shown by the previous fact, masses of convicted criminals have been deported, making it hard for them to reoffend and end up in a U.S. prison.

In other words, even after deporting 10 times more noncitizens convicted of crimes than are in U.S. prisons and jails, noncitizens are still 7% more likely to be incarcerated than the general public. This indicates a level of criminality that is multiplicatively higher than that of the U.S. population.

Furthermore, roughly half of noncitizens are in the U.S. legally, and legal immigrants rarely commit crimes. This is because U.S. immigration laws are designed to keep criminals out. Thus, the vast majority of incarcerated noncitizens are doubtlessly illegal immigrants. If legal immigrants were removed from the equation, the incarceration rate of illegal immigrants would probably be about twice as high as for all noncitizens.

On the other hand, there is uncertainty about the exact number of noncitizens in the U.S., and Census figures are almost surely low. Hence, the incarceration rate of illegal immigrants is likely not twice as high as that of the U.S. population. Nevertheless, this is only the tip of the iceberg, because the U.S. continually deports massive numbers of illegal immigrant convicts.
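To make that chain of reasoning concrete, here is a rough sketch. Only the 7% figure, the roughly-half-legal share, and the “rarely commit crimes” premise come from the article above; the 20% undercount at the end is a purely illustrative assumption:

# Rough sketch of the arithmetic in the passage above.  Figures marked
# "article" are taken from it; everything else is an illustrative assumption.

relative_rate_noncitizen = 1.07   # article: noncitizens 7% more likely to be incarcerated
legal_share = 0.5                 # article: roughly half of noncitizens are here legally
legal_relative_rate = 0.0         # article: legal immigrants "rarely" commit crimes; treat as ~0

# The overall noncitizen rate is a weighted average of the legal and illegal rates,
# so if the legal rate is ~0, illegal immigrants carry essentially all of it:
relative_rate_illegal = (relative_rate_noncitizen - legal_share * legal_relative_rate) / (1 - legal_share)
print(f"naive illegal-immigrant rate vs. U.S. population: {relative_rate_illegal:.2f}x")   # about 2.1x

# A Census undercount of noncitizens shrinks that estimate (the true denominator
# is bigger than the Census says).  An assumed 20% undercount, for illustration:
undercount_factor = 1.2
print(f"with an assumed 20% undercount: {relative_rate_illegal / undercount_factor:.2f}x")  # about 1.8x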

 

School Discipline Racial Bias: Unfounded Charge | National Review

Source: School Discipline Racial Bias: Unfounded Charge | National Review

An April 4th headline in the New York Times was eye-catching: “Government Watchdog Finds Racial Bias in School Discipline.” Eye-catching, but highly misleading. The Government Accountability Office report, which was commissioned by congressmen Bobby Scott (D., Va.) and Jerrold Nadler (D., N.Y.), found only what we’ve known for a long time — that African-American students are disciplined at higher rates than white students. Buried in a footnote, the GAO report concedes that disparities by themselves “should not be used to make conclusions about the presence or absence of unlawful discrimination.”

The fact that this concession was relegated to a footnote is not the only reason to doubt the GAO’s good faith. Education secretary Betsy DeVos is currently considering whether to withdraw the Obama administration’s controversial “Dear Colleague” letter on school discipline. That letter told schools that their federal funding can be cut off if they discipline African-American students at higher rates than white students, even if the difference is the result of the evenhanded administration of their disciplinary code. The GAO report was released to great fanfare on the same day that DeVos met with interested parties on both sides of the issue. The timing suggests GAO officials may have been all too happy to upstage DeVos.

Here’s what the GAO didn’t disclose: The major reason for the disparity is clear, and it isn’t bias. As painful as it may be to admit, African-American students, on average, misbehave more than their white counterparts. Teachers (including African-American teachers) aren’t making this up, and it isn’t doing African-American students any favors to suggest otherwise.

Just recently, the National Center for Education Statistics released a report showing that African-American students self-report being in physical fights on school property at a rate more than twice that of white students. Similarly, California’s former attorney general (and current senator) Kamala Harris reported in 2014 that African-American fifth-graders are almost five times more likely than whites to be chronically truant. In addition, as the Manhattan Institute’s Heather Mac Donald has reported, African-American male teenagers from ages 14-17 commit homicide at nearly ten times the rate of their white male counterparts. Why should anyone assume that rates of misbehavior in school would magically come out equal?

Too many of our leaders like to preen themselves, claiming that they can’t imagine why teachers would disproportionately discipline African-American students unless the reason is racial discrimination. But denying the facts doesn’t help African-American students. The primary victims of the Obama administration’s effort to federalize school-discipline policy are African-American students attending majority-minority schools who are struggling to learn amid increasing classroom disorder.

What causes these differences in behavior? The short answer is that nobody can explain it perfectly. But common sense suggests, and reams of research show, that children from fatherless households, as well as children from economically disadvantaged backgrounds, are more likely to get in trouble than other students. That’s at least a large part of the explanation.

The GAO tries to cast doubt on that by arguing that even in schools in prosperous neighborhoods, African-American students are disciplined at higher rates than whites. But the fact that a school is in a relatively prosperous locality doesn’t mean that the African-American students attending it are as well-off as their fellow students.

Everybody’s Lying About the Link Between Gun Ownership and Homicide

Source: Everybody’s Lying About the Link Between Gun Ownership and Homicide

There is no clear correlation whatsoever between gun ownership rate and gun homicide rate. Not within the USA. Not regionally. Not internationally. Not among peaceful societies. Not among violent ones. Gun ownership doesn’t make us safer. It doesn’t make us less safe. The correlation simply isn’t there. It is blatantly not-there. It is so tremendously not-there that the “not-there-ness” of it alone should be a huge news story.

And anyone with access to the internet and a basic knowledge of Microsoft Excel can check for themselves. Here’s how you do it.

First, go to the Wikipedia page on firearm death rates in the United States. If you don’t like referencing Wikipedia, then instead go to this study from the journal Injury Prevention, which was widely sourced by media on both the left and right after it came out, based on a survey of 4000 respondents. Then go to this table published by the FBI, detailing overall homicide rates, as well as gun homicide rates, by state. Copy and paste the data into Excel, and plot one versus the other on a scatter diagram. Alternately, do the whole thing on the back of a napkin. It’s not hard. Here’s what you get:

This looks less like data and more like someone shot a piece of graph paper with #8 birdshot.

If the data were correlated, we should be able to develop a best fit relationship to some mathematical trend function, and calculate an “R^2 Value,” which is a mathematical way of describing how well a trendline predicts a set of data. R^2 Values vary between 0 and 1, with 1 being a perfect fit to the data, and 0 being no fit. The R^2 Value for the linear trendline on this plot is 0.0031. Total garbage. No other function fits it either.
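For anyone who would rather script the exercise than do it in Excel, here is a minimal sketch; the ownership and homicide numbers below are placeholders, so paste in the state-level figures from the sources named above to reproduce the actual R^2:

# Minimal version of the Excel exercise described above: fit a linear trendline
# to (ownership rate, gun homicide rate) pairs and compute R^2.
# The numbers here are placeholders, not the FBI or Injury Prevention data.
import numpy as np

ownership = np.array([45.0, 12.0, 32.0, 50.0, 20.0])   # % of households, hypothetical
homicide  = np.array([3.1,  2.8,  5.0,  2.2,  4.4])    # gun homicides per 100k, hypothetical

slope, intercept = np.polyfit(ownership, homicide, 1)
predicted = slope * ownership + intercept

ss_res = np.sum((homicide - predicted) ** 2)
ss_tot = np.sum((homicide - homicide.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.4f}")   # values near 0 mean the trendline explains almost nothing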

I embellished a little with the plot, coloring the data points to correspond with whether a state is “red,” “blue,” or “swing,” according to the Romney-Obama era in which political demarcations were a little more even and a little more sensical. That should give the reader a vague sense of what the gun laws in each state are like. As you can see, there is not only no correlation whatsoever with gun ownership rate, there’s also no correlation whatsoever with state level politics.

But hey, we are a relatively unique situation on the planet, given our high ownership rates and high number of guns owned per capita, so surely there’s some supporting data linking gun ownership with gun homicide elsewhere, right?

So off we go to Wikipedia again, to their page listing countries by firearm related death rates. If Wikipedia gives you the willies, you’re going to have a harder time compiling this table on your own, because every line in it is linked to a different source. Many of them, however, come from http://www.gunpolicy.org. Their research is supported by UNSCAR, the UN Trust Facility Supporting Cooperation on Arms Regulation, so it is probably pretty reasonable data. They unfortunately do not have gun ownership rates, but do have “guns owned per 100 inhabitants,” which is a similar set we can compare against. And we drop that into Excel, or use the back of our napkin again, and now we are surely going to see how gun ownership drives gun homicide.

Well that’s disappointing.

Remember we are looking for an R^2 value close to 1, or hopefully at least up around 0.7. The value on this one is 0.0107. Garbage.

….

So let’s briefly recap. Gun Murder Rate is not correlated with firearm ownership rate in the United States, on a state by state basis. Firearm Homicide Rate is not correlated with guns per capita globally. It’s not correlated with guns per capita among peaceful countries, nor among violent countries, nor among European countries. So what in the heck is going on in the media, where we are constantly berated with signaling indicating that “more guns = more murder?”

One: They’re sneaking suicide in with the data, and then obfuscating that inclusion with rhetoric.
This is the biggest trick I see in the media, and very few people seem to pick up on it. Suicide, numerically speaking, is around twice the problem homicide is, both in overall rate and in rate by gun. Two thirds of gun deaths are suicides in the USA. And suicide rates are correlated with gun ownership rates in the USA, because suicide is much easier, and much more final, when done with a gun. If you’re going to kill yourself anyway, and you happen to have a gun in the house, then you choose that method out of convenience. Beyond that, there’s some correlation between overall suicide and gun ownership, owing to the fact that a failed suicide doesn’t show up as a suicide in the numbers, and suicides with guns rarely fail.

….

Two: They’re cooking the homicide data.
The most comprehensive example of this is probably this study from the American Journal of Public Health. It’s widely cited, and was very comprehensive in its analytical approach, and was built by people I admire and whom I admit are smarter than me. But to understand how they ended up with their conclusions, and whether those conclusions actually mean what the pundits say they mean, we have to look at what they actually did and what they actually concluded.

First off, they didn’t use actual gun ownership rates. They used fractional suicide-by-gun rates as a proxy for gun ownership. This is apparently a very common technique among gun policy researchers, but the results of that analysis ended up being very different from the ownership data in the Injury Prevention journal in my first graph of the article. The AJPH study had Hawaii at a 25.8% gun ownership rate, compared to 45% in IP, and had Mississippi at a 76.8% gun ownership rate, compared to 42.8% in IP. Could it be that suicidal people in Hawaii prefer different suicide methods than in Mississippi, and that might impact their proxy? I don’t know, but it would seem to me that the very use of a proxy at all puts the study on a very sketchy foundation. If we can’t know the ownership rate directly, then how can we check that the ratio of gun suicides properly maps over to the ownership rate? Further, the fact that the rates are so different in the two studies makes me curious about the sample size and sampling methods of the IP study. We can be absolutely certain that at least one of these studies, if not both of them, is wrong on the ownership rate data set. We know this purely because the data sets differ. They can’t both be right. They might both be wrong.

 

Series roundup:

In the second article, we unpack “gun death” statistics and look carefully at suicide.

In the third article, we debunk the “gun homicide epidemic” myth.

In the fourth article, we expand upon why there is no link between gun ownership and gun homicide rate, and why gun buybacks and other gun ownership reduction strategies cannot work.

In the fifth article, we discuss why everyone should basically just ignore school shootings.

The sixth article presents a solution free of culture wars, and the finale isn’t about guns at all.