Race and the Race for the White House: On Social Research in the Age of Trump | SpringerLink

Source: Race and the Race for the White House: On Social Research in the Age of Trump | SpringerLink

From the abstract:

This essay presents a series of case studies showing how analyses of the roles of race and racism in the 2016 U.S. Presidential Election seem to have been systematically distorted as a result. However, motivated reasoning, confirmation bias, prejudicial study design, and failure to address confounds are not limited to questions about race (a similar essay could have been done on the alleged role of sexism/misogyny in the 2016 cycle, for instance). And while Trump does seem to generate particularly powerful antipathy from researchers – perhaps exacerbating negative tendencies – ideologically-driven errors likely permeate a good deal of social research. Presented evidence suggests that research with strong adversarial or advocacy orientations may be most susceptible to systemic distortion. Activist scholars and their causes may also be among those most adversely impacted by the resultant erosion of research reliability and credibility.

The article is behind a paywall, but Campus Watch offers commentary:

One example of this phenomenon can be seen in the April 2017 Washington Post article “Racism motivated Trump voters more than authoritarianism,” by Thomas Wood, who teaches political science classes at Ohio State University.

While Wood cites survey data to claim that Trump voters were especially motivated by racism, a closer analysis by al-Gharbi reveals that Wood’s arguments about Trump voters can’t be substantiated from the data cited in the article.

“According to Wood’s own data, whites who voted for Trump are perhaps less racist than those who voted for Romney,” al-Gharbi explains, adding that “not only were they less authoritarian than Romney voters, but less racist too!”

“Unfortunately, Wood declined to consider how Trump voters differed from Romney voters…instead focusing on the gap between Democrats and Republicans in 2016, in the service of a conclusion his data do not support,” he adds.

Bees are not in Danger – Cedar Writes

Source: Bees are not in Danger – Cedar Writes

….Looking, for the moment, at honeybees in particular, we are seeing that far from being devastated by Colony Collapse Disorder, there has been an increase in their numbers. The pesticides most often blamed for bee deaths are neonicotinoids, which are applied to crops and taken up into the plants to kill pests when they eat the plant. Bees do not eat the plants, so you may be pardoned some confusion over how bees are affected at all. The neonics are, however, taken up into pollen, which bees do eat. Even so, “there is no scientific evidence to link neonicotinoids as the major cause of colony declines” even when the bees were fed 20 times the amount normally expected to be found in their usual foraging. Science has shown that, in direct opposition to what is presented in the media, low doses of pesticides and bacteria in combination can actually have a beneficial effect on bees. The EU banned neonics anyway, even though that “legislation was at no time based on a direct link on bee mortality.” In fact, honeybees in Europe are overall healthier than they were in the past, as shown by overwintering hive survival.

And what about the wild bees? Well, there are not a lot of species that visit the crops, and none of the endangered species contribute to agricultural pollination. What does this mean? That we shouldn’t do anything about the poor endangered species of bees? No… but what it does tell me is that they are not endangered because of pesticides. They don’t visit the same places where pesticides are used. And the bees who are exposed? Can be encouraged greatly with simple conservation measures like leaving strips of wildflowers blooming in between fields.

I suspect a lot of people aren’t going to want to hear this. It may mean that all their activism has been a waste of time.

Trigger Warning: Trigger warning ahead

A new study suggests that trigger warnings may actually increase student vulnerability to offensive or troubling material.

Is it possible that “trigger warnings” — warnings to students and others that they are about to encounter potentially offensive or disturbing material — do more harm than good? A new study suggests that may be the case.

Trigger warnings may inadvertently undermine some aspects of emotional resilience. Further research is needed on the generalizability of our findings, especially to collegiate populations and to those with trauma histories.

Source

Is IQ real?

(Or does it default to integer?)*

Jordan Peterson has some comments here.

From the autogenerated transcript:

[1:11] One of the things I have to tell you about IQ research is that if you don’t buy IQ research, you might as well throw away all the rest of psychology. And the reason for that is that the psychologists who developed intelligence testing were, first of all, among the early psychologists who instantiated the statistical techniques that all psychologists use to verify and test all of their hypotheses, so you end up throwing the baby out with the bathwater. And the IQ people have defined intelligence in a more stringent and accurate way than we’ve been able to define almost any other psychological construct. And so if you toss out the one that’s most well defined, then you’re kind of stuck with the problem of what are you going to do with all the ones that you have left over that are nowhere near as well defined, or as well measured, or whose predictive validity is much less and has been demonstrated with much less rigor and clarity.

Also here:

[00:01] So IQ is reliable and valid. It’s more reliable and valid than any other psychometric test ever designed by social scientists. The IQ claims are more psychometrically rigorous than any other phenomenon that’s been discovered by social scientists.

Also of interest:

[08:32] I should tell you how to make an IQ test. It’s actually really easy, and you need to know this to actually understand what IQ is. So imagine that you generated a set of 10,000 questions, okay, about anything. They could be math problems, they could be general knowledge, they could be vocabulary, they could be multiple choice. It really doesn’t matter what they’re about, as long as they require abstraction to solve, so they’d be formulated linguistically, but mathematical ones would also apply. And then you have those 10,000 questions. Now you take a random set of a hundred of those questions and you give them to a thousand people, and all you do is sum up the answers. So some people are going to get most of them right and some of them are going to get most of them wrong. You just rank-order the people in terms of their score, correct that for age, and you have IQ. That’s all there is to it. And what you’ll find is that no matter which random set of a hundred questions you take, the people at the top of one random set will be at the top of all the others with very, very high consistency. So one thing you need to know is that if any social science claims whatsoever are correct, then the IQ claims are correct, because the IQ claims are more psychometrically rigorous than any other phenomenon that’s been discovered by social scientists.

*  Fortran reference
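The construction described in that last excerpt is easy to simulate. Here is a minimal sketch in Python, assuming a simple latent-ability model of question answering (my own illustrative assumption, not anything from the talk): generate a pool of 10,000 questions, score a thousand simulated people on two disjoint random subsets of 100, and see how well the two rankings agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 1_000, 10_000

# Hypothetical model: each person has a latent ability, each question a
# difficulty, and the chance of a correct answer rises with (ability - difficulty).
ability = rng.normal(size=n_people)
difficulty = rng.normal(size=n_questions)
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
answers = rng.random((n_people, n_questions)) < p_correct   # True = answered correctly

# Two disjoint random sets of 100 questions; score each person on both.
idx = rng.permutation(n_questions)
score_a = answers[:, idx[:100]].sum(axis=1)
score_b = answers[:, idx[100:200]].sum(axis=1)

# The transcript's claim, in miniature: the orderings produced by the two
# random question sets agree very closely.
print("correlation between the two scores:", np.corrcoef(score_a, score_b)[0, 1])
```

Under these toy assumptions the two scores correlate at roughly 0.9; the point is only to make the “random subsets give essentially the same ranking” claim concrete, not to say anything about how real tests are built or normed.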

When does correlation imply cause?

If correlation doesn’t imply causation, then what does?

Of course, while it’s all very well to piously state that correlation doesn’t imply causation, it does leave us with a conundrum: under what conditions, exactly, can we use experimental data to deduce a causal relationship between two or more variables?

The standard scientific answer to this question is that (with some caveats) we can infer causality from a well designed randomized controlled experiment. Unfortunately, while this answer is satisfying in principle and sometimes useful in practice, it’s often impractical or impossible to do a randomized controlled experiment. And so we’re left with the question of whether there are other procedures we can use to infer causality from experimental data. And, given that we can find more general procedures for inferring causal relationships, what does causality mean, anyway, for how we reason about a system?

It might seem that the answers to such fundamental questions would have been settled long ago. In fact, they turn out to be surprisingly subtle questions. Over the past few decades, a group of scientists have developed a theory of causal inference intended to address these and other related questions. This theory can be thought of as an algebra or language for reasoning about cause and effect. Many elements of the theory have been laid out in a famous book by one of the main contributors to the theory, Judea Pearl. Although the theory of causal inference is not yet fully formed, and is still undergoing development, what has already been accomplished is interesting and worth understanding.

In this post I will describe one small but important part of the theory of causal inference, a causal calculus developed by Pearl. This causal calculus is a set of three simple but powerful algebraic rules which can be used to make inferences about causal relationships. In particular, I’ll explain how the causal calculus can sometimes (but not always!) be used to infer causation from a set of data, even when a randomized controlled experiment is not possible. Also in the post, I’ll describe some of the limits of the causal calculus, and some of my own speculations and questions.

The post is a little technically detailed at points. However, the first three sections of the post are non-technical, and I hope will be of broad interest. Throughout the post I’ve included occasional “Problems for the author”, where I describe problems I’d like to solve, or things I’d like to understand better. Feel free to ignore these if you find them distracting, but I hope they’ll give you some sense of what I find interesting about the subject. Incidentally, I’m sure many of these problems have already been solved by others; I’m not claiming that these are all open research problems, although perhaps some are. They’re simply things I’d like to understand better. Also in the post I’ve included some exercises for the reader, and some slightly harder problems for the reader. You may find it informative to work through these exercises and problems.

Before diving in, one final caveat: I am not an expert on causal inference, nor on statistics. The reason I wrote this post was to help me internalize the ideas of the causal calculus. Occasionally, one finds a presentation of a technical subject which is beautifully clear and illuminating, a presentation where the author has seen right through the subject, and is able to convey that crystalized understanding to others. That’s a great aspirational goal, but I don’t yet have that understanding of causal inference, and these notes don’t meet that standard. Nonetheless, I hope others will find my notes useful, and that experts will speak up to correct any errors or misapprehensions on my part.

Simpson’s paradox
Let me start by explaining two example problems to illustrate some of the difficulties we run into when making inferences about causality. The first is known as Simpson’s paradox. To explain Simpson’s paradox I’ll use a concrete example based on the passage of the Civil Rights Act in the United States in 1964.

In the US House of Representatives, 61 percent of Democrats voted for the Civil Rights Act, while a much higher percentage, 80 percent, of Republicans voted for the Act. You might think that we could conclude from this that being Republican, rather than Democrat, was an important factor in causing someone to vote for the Civil Rights Act. However, the picture changes if we include an additional factor in the analysis, namely, whether a legislator came from a Northern or Southern state. If we include that extra factor, the situation completely reverses, in both the North and the South. Here’s how it breaks down:

North: Democrat (94 percent), Republican (85 percent)

South: Democrat (7 percent), Republican (0 percent)

Yes, you read that right: in both the North and the South, a larger fraction of Democrats than Republicans voted for the Act, despite the fact that overall a larger fraction of Republicans than Democrats voted for the Act.

You might wonder how this can possibly be true. I’ll quickly state the raw voting numbers, so you can check that the arithmetic works out, and then I’ll explain why it’s true. You can skip the numbers if you trust my arithmetic.

North: Democrat (145/154, 94 percent), Republican (138/162, 85 percent)

South: Democrat (7/94, 7 percent), Republican (0/10, 0 percent)

Overall: Democrat (152/248, 61 percent), Republican (138/172, 80 percent)

One way of understanding what’s going on is to note that a far greater proportion of Democrat (as opposed to Republican) legislators were from the South. In fact, at the time the House had 94 Democrats from the South, and only 10 Republicans. Because of this enormous difference, the very low fraction (7 percent) of southern Democrats voting for the Act dragged down the Democrats’ overall percentage much more than did the even lower fraction (0 percent) of southern Republicans who voted for the Act.

(The numbers above are for the House of Representatives. The numbers were different in the Senate, but the same overall phenomenon occurred. I’ve taken the numbers from Wikipedia’s article about Simpson’s paradox, and there are more details there.)
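If you’d rather not check the arithmetic on a napkin, a few lines of Python reproduce the reversal directly from the quoted counts:

```python
# House vote counts on the 1964 Civil Rights Act, as quoted above:
# (yes votes, total members) by region and party.
votes = {
    ("North", "Democrat"):   (145, 154),
    ("North", "Republican"): (138, 162),
    ("South", "Democrat"):   (7, 94),
    ("South", "Republican"): (0, 10),
}

for party in ("Democrat", "Republican"):
    yes = sum(votes[(region, party)][0] for region in ("North", "South"))
    total = sum(votes[(region, party)][1] for region in ("North", "South"))
    line = f"{party}: overall {yes}/{total} = {yes / total:.0%}"
    for region in ("North", "South"):
        y, t = votes[(region, party)]
        line += f", {region} {y}/{t} = {y / t:.0%}"
    print(line)

# Democrat: overall 152/248 = 61%, North 145/154 = 94%, South 7/94 = 7%
# Republican: overall 138/172 = 80%, North 138/162 = 85%, South 0/10 = 0%
```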

If we take a naive causal point of view, this result looks like a paradox. As I said above, the overall voting pattern seems to suggest that being Republican, rather than Democrat, was an important causal factor in voting for the Civil Rights Act. Yet if we look at the individual statistics in both the North and the South, then we’d come to the exact opposite conclusion. To state the same result more abstractly, Simpson’s paradox is the fact that the correlation between two variables can actually be reversed when additional factors are considered. So two variables which appear correlated can become anticorrelated when another factor is taken into account.

You might wonder if results like those we saw in voting on the Civil Rights Act are simply an unusual fluke. But, in fact, this is not that uncommon. Wikipedia’s page on Simpson’s paradox lists many important and similar real-world examples ranging from understanding whether there is gender-bias in university admissions to which treatment works best for kidney stones. In each case, understanding the causal relationships turns out to be much more complex than one might at first think.

Causal models
To help address problems like the two example problems just discussed, Pearl introduced a causal calculus. In the remainder of this post, I will explain the rules of the causal calculus, and use them to analyse the smoking-cancer connection. We’ll see that even without doing a randomized controlled experiment it’s possible (with the aid of some reasonable assumptions) to infer what the outcome of a randomized controlled experiment would have been, using only relatively easily accessible experimental data, data that doesn’t require experimental intervention to force people to smoke or not, but which can be obtained from purely observational studies.

To state the rules of the causal calculus, we’ll need several background ideas. I’ll explain those ideas over the next three sections of this post. The ideas are causal models (covered in this section), causal conditional probabilities, and d-separation, respectively. It’s a lot to swallow, but the ideas are powerful, and worth taking the time to understand. With these notions under our belts, we’ll be able to understand the rules of the causal calculus.

Read the blog post for the rest, with diagrams.

The Conquest of Climate – Progress and Peril

Source: The Conquest of Climate – Progress and Peril

How bad will climate change be? Not very.

No, this isn’t a denialist screed. Human greenhouse emissions will warm the planet, raise the seas and derange the weather, and the resulting heat, flood and drought will be cataclysmic.

Cataclysmic—but not apocalyptic. While the climate upheaval will be large, the consequences for human well-being will be small. Looked at in the broader context of economic development, climate change will barely slow our progress in the effort to raise living standards.

To see why, consider a 2016 Newsweek headline that announced “Climate change could cause half a million deaths in 2050 due to reduced food availability.” The story described a Lancet study, “Global and regional health effects of future food production under climate change,” [1] that made dire forecasts: by 2050 the effects of climate change on agriculture will shrink the amount of food people eat, especially fruits and vegetables, enough to cause 529,000 deaths each year from malnutrition and related diseases. The report added grim specifics to the familiar picture of a world made hot, hungry, and barren by the coming greenhouse apocalypse.

But buried beneath the gloomy headlines was a curious detail: the study also predicts that in 2050 the world will be better fed than ever before. The “reduced food availability” is only relative to a 2050 baseline when food will be more abundant than now thanks to advances in agricultural productivity that will dwarf the effects of climate change. Those advances on their own will raise per-capita food availability to 3,107 kilocalories per day; climate change could shave that to 3,008 kilocalories, but that’s still substantially higher than the benchmarked 2010 level of 2,817 kilocalories—and for a much larger global population. Per-capita fruit and vegetable consumption, the study estimated, will rise by 6.1 percent and meat consumption by 5.4 percent. The poorest countries will benefit most, with food availability rising 14 percent in Africa and Southeast Asia. Even after subtracting the 529,000 lives theoretically lost to climate change, the study estimates that improved diets will save a net 1,348,000 lives per year in 2050.
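For concreteness, here is the quick arithmetic implied by the figures quoted above (my own back-of-the-envelope check, not anything from the study itself):

```python
# Per-capita food availability (kcal/day) from the quoted Lancet projections.
kcal_2010 = 2817                 # 2010 benchmark
kcal_2050_no_climate = 3107      # 2050 projection without climate effects
kcal_2050_with_climate = 3008    # 2050 projection with climate effects

# Climate change trims projected 2050 availability by about 3 percent...
print(f"reduction from climate change: "
      f"{(kcal_2050_no_climate - kcal_2050_with_climate) / kcal_2050_no_climate:.1%}")
# ...but 2050 availability still exceeds the 2010 baseline by about 7 percent.
print(f"gain over the 2010 baseline: "
      f"{(kcal_2050_with_climate - kcal_2010) / kcal_2010:.1%}")

# The quoted net of 1,348,000 lives saved in 2050 already has the 529,000
# climate-attributed deaths subtracted, implying roughly 1,877,000 gross
# lives saved per year by improved diets.
print(f"implied gross lives saved: {1_348_000 + 529_000:,}")
```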

Even Jeanne Dixon got a few right

Tomorrow, Sunday, April 22, is Earth Day 2018. In the May 2000 issue of Reason Magazine, award-winning science correspondent Ronald Bailey wrote an excellent article titled “Earth Day, Then and Now” to provide some historical perspective on the 30th anniversary of Earth Day. In that article, Bailey noted that around the time of the first Earth Day […]

via 18 examples of the spectacularly wrong predictions made around the first “Earth Day” in 1970 — Watts Up With That?

Everybody’s Lying About the Link Between Gun Ownership and Homicide

Source: Everybody’s Lying About the Link Between Gun Ownership and Homicide

There is no clear correlation whatsoever between gun ownership rate and gun homicide rate. Not within the USA. Not regionally. Not internationally. Not among peaceful societies. Not among violent ones. Gun ownership doesn’t make us safer. It doesn’t make us less safe. The correlation simply isn’t there. It is blatantly not-there. It is so tremendously not-there that the “not-there-ness” of it alone should be a huge news story.

And anyone with access to the internet and a basic knowledge of Microsoft Excel can check for themselves. Here’s how you do it.

First, go to the Wikipedia page on firearm death rates in the United States. If you don’t like referencing Wikipedia, then instead go to this study from the journal Injury Prevention, which was widely sourced by media on both the left and right after it came out, based on a survey of 4000 respondents. Then go to this table published by the FBI, detailing overall homicide rates, as well as gun homicide rates, by state. Copy and paste the data into Excel, and plot one versus the other on a scatter diagram. Alternately, do the whole thing on the back of a napkin. It’s not hard. Here’s what you get:

This looks less like data and more like someone shot a piece of graph paper with #8 birdshot.

If the data were correlated, we should be able to develop a best fit relationship to some mathematical trend function, and calculate an “R^2 Value,” which is a mathematical way of describing how well a trendline predicts a set of data. R^2 Values vary between 0 and 1, with 1 being a perfect fit to the data, and 0 being no fit. The R^2 Value for the linear trendline on this plot is 0.0031. Total garbage. No other function fits it either.
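If you’d rather run the check in code than in Excel, the whole thing is a few lines of Python. The numbers below are hypothetical placeholders, not the article’s data; paste in the per-state ownership and gun-homicide figures from the sources linked above.

```python
import numpy as np

# Placeholder values only; replace with the per-state figures you copied
# (same state order in both arrays).
ownership = np.array([45.3, 34.3, 32.3, 57.9, 20.1, 34.7, 16.6, 5.2])   # % of households
gun_homicide = np.array([4.2, 1.8, 3.6, 3.2, 3.4, 1.3, 2.7, 1.5])       # per 100,000

# Least-squares linear trendline, then R^2: the fraction of variance in the
# homicide rate that the trendline explains (1 = perfect fit, 0 = no fit).
slope, intercept = np.polyfit(ownership, gun_homicide, 1)
residuals = gun_homicide - (slope * ownership + intercept)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((gun_homicide - gun_homicide.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")
```

The same fit-and-R^2 computation works unchanged for the international “guns owned per 100 inhabitants” comparison discussed further down.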

I embellished a little with the plot, coloring the data points to correspond with whether a state is “red,” “blue,” or “swing,” according to the Romney-Obama era in which political demarcations were a little more even and a little more sensical. That should give the reader a vague sense of what the gun laws in each state are like. As you can see, there is not only no correlation whatsoever with gun ownership rate, there’s also no correlation whatsoever with state level politics.

But hey, we are a relatively unique situation on the planet, given our high ownership rates and high number of guns owned per capita, so surely there’s some supporting data linking gun ownership with gun homicide elsewhere, right?

So off we go to Wikipedia again, to their page listing countries by firearm related death rates. If Wikipedia gives you the willies, you’re going to have a harder time compiling this table on your own, because every line in it is linked to a different source. Many of them, however, come from http://www.gunpolicy.org. Their research is supported by UNSCAR, the UN Trust Facility Supporting Cooperation on Arms Regulation, so it is probably pretty reasonable data. They unfortunately do not have gun ownership rates, but do have “guns owned per 100 inhabitants,” which is a similar set we can compare against. And we drop that into Excel, or use the back of our napkin again, and now we are surely going to see how gun ownership drives gun homicide.

Well that’s disappointing.

Remember we are looking for an R^2 value close to 1, or hopefully at least up around 0.7. The value on this one is 0.0107. Garbage.

….

So let’s briefly recap. Gun Murder Rate is not correlated with firearm ownership rate in the United States, on a state by state basis. Firearm Homicide Rate is not correlated with guns per capita globally. It’s not correlated with guns per capita among peaceful countries, nor among violent countries, nor among European countries. So what in the heck is going on in the media, where we are constantly berated with signaling indicating that “more guns = more murder?”

One: They’re sneaking suicide in with the data, and then obfuscating that inclusion with rhetoric.
This is the biggest trick I see in the media, and very few people seem to pick up on it. Suicide, numerically speaking, is around twice the problem homicide is, both in overall rate and in rate by gun. Two thirds of gun deaths are suicides in the USA. And suicide rates are correlated with gun ownership rates in the USA, because suicide is much easier, and much more final, when done with a gun. If you’re going to kill yourself anyway, and you happen to have a gun in the house, then you choose that method out of convenience. Beyond that, there’s some correlation between overall suicide and gun ownership, owing to the fact that a failed suicide doesn’t show up as a suicide in the numbers, and suicides with guns rarely fail.

….

Two: They’re cooking the homicide data.
The most comprehensive example of this is probably this study from the American Journal of Public Health. It’s widely cited, was very comprehensive in its analytical approach, and was built by people I admire and who, I admit, are smarter than me. But to understand how they ended up with their conclusions, and whether those conclusions actually mean what the pundits say they mean, we have to look at what they actually did and what they actually concluded.

First off, they didn’t use actual gun ownership rates. They used fractional suicide-by-gun rates as a proxy for gun ownership. This is apparently a very common technique by gun policy researchers, but the results of that analysis ended up being very different from the ownership data in the Injury Prevention journal in my first graph of the article. The AJPH study had Hawaii at 25.8% gun ownership rate, compared to 45% in IP, and had Mississippi at 76.8% gun ownership rate, compared to 42.8% in IP. Could it be that suicidal people in Hawaii prefer different suicide methods than in Mississippi, and that might impact their proxy? I don’t know, but it would seem to me that the very use of a proxy at all puts the study on a very sketchy foundation. If we can’t know the ownership rate directly, then how can we check that the ratio of gun suicides properly maps over to the ownership rate? Further, the fact that the rates are so different in the two studies makes me curious about the sample size and sampling methods of the IP study. We can be absolutely certain that at least one of these studies, if not both of them, is wrong on the ownership rate data set. We know this purely because the data sets differ. They can’t both be right. They might both be wrong.


Series roundup:

In the second article, we unpack “gun death” statistics and look carefully at suicide.

In the third article, we debunk the “gun homicide epidemic” myth.

In the fourth article, we expand upon why there is no link between gun ownership and gun homicide rate, and why gun buybacks and other gun ownership reduction strategies cannot work.

In the fifth article, we discuss why everyone should basically just ignore school shootings.

The sixth article presents a solution free of culture wars, and the finale isn’t about guns at all.