Climate Change Questions

From Watts Up With That:

The issue of climate change (aka global warming) depends on the answers to three questions all being “yes”:
1) Is the planet getting warmer?
2) Is the warming due to human activity?
3) Is this warming going to lead to disaster?

It seems 96% of atmospheric scientists answer “yes” to question 1.

In another survey, 29% of scientists surveyed say the warming is entirely due to human activity, and another 38% say it is “mostly” (60-80%) due to human activity, for a combined 67%.

In a third survey, half believe the effects will be primarily (47%) or exclusively (3%) negative over the next half century.

So, the consensus for an anthropogenic climate change disaster is
96% × 67% × 50% ≈ 32%.
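Spelling that arithmetic out (the figures are the ones quoted above; note that the multiplication treats the three survey answers as independent, which the argument leaves implicit):

```python
# The chained "consensus" arithmetic, using the survey figures quoted above.
q1 = 0.96          # planet is getting warmer
q2 = 0.29 + 0.38   # entirely + mostly human activity = 0.67
q3 = 0.47 + 0.03   # primarily + exclusively negative = 0.50

print(f"{q1 * q2 * q3:.0%}")  # prints: 32%
```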

It would be interesting to see the answer to a question 2A) “Can humans significantly reverse the warming of the planet?”

Charts and details at the link up top.

CO2 Emissions Down in US

From The Daily Caller

Greenhouse gas emissions continued to plummet during President Donald Trump’s first year in office, according to new Environmental Protection Agency (EPA) data.

Based on data from more than 8,000 large facilities, EPA found greenhouse gas emissions, mostly carbon dioxide, fell 2.7 percent from 2016 to 2017. Emissions from large power plants fell 4.5 percent from 2016 levels, according to EPA.

“Thanks to President Trump’s regulatory reform agenda, the economy is booming, energy production is surging, and we are reducing greenhouse gas emissions from major industrial sources,” EPA acting Administrator Andrew Wheeler said in a statement.

Earlier this year, the Energy Information Administration reported that per-capita greenhouse gas emissions hit a 67-year low during Trump’s first year in office.

This appears to be the source of the data.

Race and the Race for the White House: On Social Research in the Age of Trump | SpringerLink

Source: Race and the Race for the White House: On Social Research in the Age of Trump | SpringerLink

From the abstract:

This essay presents a series of case studies showing how analyses of the roles of race and racism in the 2016 U.S. Presidential Election seem to have been systematically distorted as a result. However, motivated reasoning, confirmation bias, prejudicial study design, and failure to address confounds are not limited to questions about race (a similar essay could have been done on the alleged role of sexism/misogyny in the 2016 cycle, for instance). And while Trump does seem to generate particularly powerful antipathy from researchers – perhaps exacerbating negative tendencies – ideologically-driven errors likely permeate a good deal of social research. Presented evidence suggests that research with strong adversarial or advocacy orientations may be most susceptible to systemic distortion. Activist scholars and their causes may also be among those most adversely impacted by the resultant erosion of research reliability and credibility.

The article is behind a paywall, but Campus Watch offers commentary:

One example of this phenomenon can be seen in the April 2017 Washington Post article “Racism motivated Trump voters more than authoritarianism,” by Thomas Wood, who teaches political science classes at Ohio State University.

While Wood cites survey data to claim that Trump voters were especially motivated by racism, a closer analysis by al-Gharbi reveals that Wood’s arguments about Trump voters can’t be substantiated from the data cited in the article.

“According to Wood’s own data, whites who voted for Trump are perhaps less racist than those who voted for Romney,” al-Gharbi explains, adding that “not only were they less authoritarian than Romney voters, but less racist too!”

“Unfortunately, Wood declined to consider how Trump voters differed from Romney voters…instead focusing on the gap between Democrats and Republicans in 2016, in the service of a conclusion his data do not support,” he adds.

Bees are not in Danger – Cedar Writes

Source: Bees are not in Danger – Cedar Writes

….Looking, for the moment, at honeybees in particular, we are seeing that far from being devastated by Colony Collapse Disorder, there has been an increase in their numbers. The pesticides most often blamed for bee deaths are neonicotinoids, which are applied to crops and taken up into the plants to kill pests when they eat the plant. Bees do not eat the plants, so you may be pardoned some confusion over how bees are affected at all. The answer is that the neonics are taken up into pollen, which bees do eat. Even so, “there is no scientific evidence to link neonicotinoids as the major cause of colony declines” even when the bees were fed 20 times the amount normally expected to be found in their usual foraging. Science has shown that, in direct opposition to what is being shown in the media, low doses of pesticides and bacteria in combination can actually have a beneficial effect on bees. But the EU banned neonics anyway, even though that “legislation was at no time based on a direct link on bee mortality.” In fact, honeybees in Europe are overall healthier than they were in the past, as shown by overwintering hive survival.

And what about the wild bees? Well, there are not a lot of species that visit the crops, and none of the endangered species contribute to agricultural pollination. What does this mean? That we shouldn’t do anything about the poor endangered species of bees? No… but what it does tell me is that they are not endangered because of pesticides. They don’t visit the same places where pesticides are used. And the bees who are exposed? Can be encouraged greatly with simple conservation measures like leaving strips of wildflowers blooming in between fields.

I suspect a lot of people aren’t going to want to hear this. It may mean that all their activism has been a waste of time.

Trigger Warning: Trigger warning ahead

A new study suggests that trigger warnings may actually increase student vulnerability to offensive or troubling material.

Is it possible that “trigger warnings” — warnings to students and others that they are about to encounter potentially offensive or disturbing material — do more harm than good? A new study suggests that may be the case.

Trigger warnings may inadvertently undermine some aspects of emotional resilience. Further research is needed on the generalizability of our findings, especially to collegiate populations and to those with trauma histories.

Source

Is IQ real?

(Or does it default to integer?)*

Jordan Peterson has some comments here.

From the transcript:

One of the things I have to tell you about IQ research is that if you don’t buy IQ research, you might as well throw away all the rest of psychology. The reason for that is that the psychologists who developed intelligence testing were among the early psychologists who instantiated the statistical techniques that all psychologists use to verify and test all of their hypotheses, so you end up throwing the baby out with the bathwater. And the IQ people have defined intelligence in a more stringent and accurate way than we’ve been able to define almost any other psychological construct. So if you toss out the one that’s most well defined, then you’re kind of stuck with the problem of what you’re going to do with all the ones you have left over that are nowhere near as well defined, or as well measured, or whose predictive validity is much less and has been demonstrated with much less rigor and clarity.

Also here:

So IQ is reliable and valid; it’s more reliable and valid than any other psychometric test ever designed by social scientists. The IQ claims are more psychometrically rigorous than any other phenomenon that’s been discovered by social scientists.

Also of interest:

I should tell you how to make an IQ test; it’s actually really easy, and you need to know this to actually understand what IQ is. So imagine that you generated a set of 10,000 questions, about anything. They could be math problems, they could be general knowledge, they could be vocabulary, they could be multiple choice; it really doesn’t matter what they’re about, as long as they require abstraction to solve. So they’d be formulated linguistically, but mathematical ones would also apply. Now you take a random set of a hundred of those questions, you give them to a thousand people, and all you do is sum up the answers. Some people are going to get most of them right, and some of them are going to get most of them wrong. You just rank-order the people in terms of their score, correct that for age, and you have IQ. That’s all there is to it. And what you’ll find is that no matter which random set of a hundred questions you take, the people at the top of one random set will be at the top of all the others with very, very high consistency. So one thing you need to know is that if any social science claims whatsoever are correct, then the IQ claims are correct, because the IQ claims are more psychometrically rigorous than any other phenomenon that’s been discovered by social scientists.
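That recipe is straightforward to simulate. Below is a minimal sketch in Python (my illustration, not Peterson’s; every name and parameter is invented) that builds an answer matrix from a single latent ability using a simple logistic (Rasch-style) model, scores everyone on two disjoint random 100-question subsets, and compares the rank orderings. Note that the high agreement is baked in by the single-ability assumption; the empirical claim is that real test-takers behave the same way.

```python
# Minimal simulation of the "random question subsets" IQ-test recipe.
# Everything here is illustrative: a single latent ability per person,
# a logistic (Rasch-style) answer model, and two random 100-item tests.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 1000, 10_000

ability = rng.normal(size=n_people)        # latent general ability per person
difficulty = rng.normal(size=n_questions)  # per-question difficulty

# P(correct) rises with (ability - difficulty); answers is a 1000 x 10000 matrix.
p_correct = 1.0 / (1.0 + np.exp(difficulty[None, :] - ability[:, None]))
answers = rng.random((n_people, n_questions)) < p_correct

# Two disjoint random 100-question "tests"; a score is just the sum of answers.
qs = rng.permutation(n_questions)
score_a = answers[:, qs[:100]].sum(axis=1)
score_b = answers[:, qs[100:200]].sum(axis=1)

# Rank-order people by each test and compare the two rankings.
rank_a, rank_b = score_a.argsort().argsort(), score_b.argsort().argsort()
print(np.corrcoef(rank_a, rank_b)[0, 1])   # typically around 0.9
```

Correcting for age, as the transcript mentions, would just mean norming scores within age cohorts; it’s omitted here for brevity.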

*  Fortran reference: in old Fortran, variables whose names begin with I through N are implicitly typed INTEGER, so IQ defaults to integer.

When does correlation imply cause?

If correlation doesn’t imply causation, then what does?

Of course, while it’s all very well to piously state that correlation doesn’t imply causation, it does leave us with a conundrum: under what conditions, exactly, can we use experimental data to deduce a causal relationship between two or more variables?

The standard scientific answer to this question is that (with some caveats) we can infer causality from a well designed randomized controlled experiment. Unfortunately, while this answer is satisfying in principle and sometimes useful in practice, it’s often impractical or impossible to do a randomized controlled experiment. And so we’re left with the question of whether there are other procedures we can use to infer causality from experimental data. And, given that we can find more general procedures for inferring causal relationships, what does causality mean, anyway, for how we reason about a system?

It might seem that the answers to such fundamental questions would have been settled long ago. In fact, they turn out to be surprisingly subtle questions. Over the past few decades, a group of scientists have developed a theory of causal inference intended to address these and other related questions. This theory can be thought of as an algebra or language for reasoning about cause and effect. Many elements of the theory have been laid out in a famous book by one of the main contributors to the theory, Judea Pearl. Although the theory of causal inference is not yet fully formed, and is still undergoing development, what has already been accomplished is interesting and worth understanding.

In this post I will describe one small but important part of the theory of causal inference, a causal calculus developed by Pearl. This causal calculus is a set of three simple but powerful algebraic rules which can be used to make inferences about causal relationships. In particular, I’ll explain how the causal calculus can sometimes (but not always!) be used to infer causation from a set of data, even when a randomized controlled experiment is not possible. Also in the post, I’ll describe some of the limits of the causal calculus, and some of my own speculations and questions.

The post is a little technically detailed at points. However, the first three sections of the post are non-technical, and I hope will be of broad interest. Throughout the post I’ve included occasional “Problems for the author”, where I describe problems I’d like to solve, or things I’d like to understand better. Feel free to ignore these if you find them distracting, but I hope they’ll give you some sense of what I find interesting about the subject. Incidentally, I’m sure many of these problems have already been solved by others; I’m not claiming that these are all open research problems, although perhaps some are. They’re simply things I’d like to understand better. Also in the post I’ve included some exercises for the reader, and some slightly harder problems. You may find it informative to work through these exercises and problems.

Before diving in, one final caveat: I am not an expert on causal inference, nor on statistics. The reason I wrote this post was to help me internalize the ideas of the causal calculus. Occasionally, one finds a presentation of a technical subject which is beautifully clear and illuminating, a presentation where the author has seen right through the subject, and is able to convey that crystalized understanding to others. That’s a great aspirational goal, but I don’t yet have that understanding of causal inference, and these notes don’t meet that standard. Nonetheless, I hope others will find my notes useful, and that experts will speak up to correct any errors or misapprehensions on my part.

Simpson’s paradox
Let me start by explaining an example problem that illustrates some of the difficulties we run into when making inferences about causality, known as Simpson’s paradox. To explain Simpson’s paradox I’ll use a concrete example based on the passage of the Civil Rights Act in the United States in 1964.

In the US House of Representatives, 61 percent of Democrats voted for the Civil Rights Act, while a much higher percentage, 80 percent, of Republicans voted for the Act. You might think that we could conclude from this that being Republican, rather than Democrat, was an important factor in causing someone to vote for the Civil Rights Act. However, the picture changes if we include an additional factor in the analysis, namely, whether a legislator came from a Northern or Southern state. If we include that extra factor, the situation completely reverses, in both the North and the South. Here’s how it breaks down:

North: Democrat (94 percent), Republican (85 percent)

South: Democrat (7 percent), Republican (0 percent)

Yes, you read that right: in both the North and the South, a larger fraction of Democrats than Republicans voted for the Act, despite the fact that overall a larger fraction of Republicans than Democrats voted for the Act.

You might wonder how this can possibly be true. I’ll quickly state the raw voting numbers, so you can check that the arithmetic works out, and then I’ll explain why it’s true. You can skip the numbers if you trust my arithmetic.

North: Democrat (145/154, 94 percent), Republican (138/162, 85 percent)

South: Democrat (7/94, 7 percent), Republican (0/10, 0 percent)

Overall: Democrat (152/248, 61 percent), Republican (138/172, 80 percent)

One way of understanding what’s going on is to note that a far greater proportion of Democrat (as opposed to Republican) legislators were from the South. In fact, at the time the House had 94 Democrats from the South, and only 10 Republicans. Because of this enormous difference, the very low fraction (7 percent) of southern Democrats voting for the Act dragged down the Democrats’ overall percentage much more than did the even lower fraction (0 percent) of southern Republicans who voted for the Act.

(The numbers above are for the House of Representatives. The numbers were different in the Senate, but the same overall phenomenon occurred. I’ve taken the numbers from Wikipedia’s article about Simpson’s paradox, and there are more details there.)
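For anyone who wants to check the arithmetic mechanically, here is a short Python sketch using exactly the vote counts above; it just recomputes the per-region and overall percentages and exhibits the reversal:

```python
# Verify the Simpson's paradox reversal in the 1964 Civil Rights Act votes
# (House figures as quoted above: yes-votes / total, by region and party).
votes = {
    "North": {"Democrat": (145, 154), "Republican": (138, 162)},
    "South": {"Democrat": (7, 94), "Republican": (0, 10)},
}

for party in ("Democrat", "Republican"):
    for region in votes:
        yes, total = votes[region][party]
        print(f"{region} {party}: {yes}/{total} = {yes/total:.0%}")
    yes = sum(votes[r][party][0] for r in votes)
    total = sum(votes[r][party][1] for r in votes)
    print(f"Overall {party}: {yes}/{total} = {yes/total:.0%}")

# Democrats lead in both regions (94% vs 85% in the North, 7% vs 0% in the
# South) yet trail overall (61% vs 80%), because 94 of the 248 Democrats sat
# for the low-support South, versus only 10 of the 172 Republicans.
```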

If we take a naive causal point of view, this result looks like a paradox. As I said above, the overall voting pattern seems to suggest that being Republican, rather than Democrat, was an important causal factor in voting for the Civil Rights Act. Yet if we look at the individual statistics in both the North and the South, then we’d come to the exact opposite conclusion. To state the same result more abstractly, Simpson’s paradox is the fact that the correlation between two variables can actually be reversed when additional factors are considered. So two variables which appear correlated can become anticorrelated when another factor is taken into account.

You might wonder if results like those we saw in voting on the Civil Rights Act are simply an unusual fluke. But, in fact, this is not that uncommon. Wikipedia’s page on Simpson’s paradox lists many important and similar real-world examples ranging from understanding whether there is gender bias in university admissions to which treatment works best for kidney stones. In each case, understanding the causal relationships turns out to be much more complex than one might at first think.

Causal models
To help address problems like the one just discussed, Pearl introduced a causal calculus. In the remainder of this post, I will explain the rules of the causal calculus, and use them to analyse the smoking-cancer connection. We’ll see that even without doing a randomized controlled experiment it’s possible (with the aid of some reasonable assumptions) to infer what the outcome of a randomized controlled experiment would have been, using only relatively easily accessible experimental data, data that doesn’t require experimental intervention to force people to smoke or not, but which can be obtained from purely observational studies.

To state the rules of the causal calculus, we’ll need several background ideas. I’ll explain those ideas over the next three sections of this post. The ideas are causal models (covered in this section), causal conditional probabilities, and d-separation, respectively. It’s a lot to swallow, but the ideas are powerful, and worth taking the time to understand. With these notions under our belts, we’ll be able to understand the rules of the causal calculus.

Read the blog post for the rest, with diagrams.

The Conquest of Climate – Progress and Peril

Source: The Conquest of Climate – Progress and Peril

How bad will climate change be? Not very.

No, this isn’t a denialist screed. Human greenhouse emissions will warm the planet, raise the seas and derange the weather, and the resulting heat, flood and drought will be cataclysmic.

Cataclysmic—but not apocalyptic. While the climate upheaval will be large, the consequences for human well-being will be small. Looked at in the broader context of economic development, climate change will barely slow our progress in the effort to raise living standards.

To see why, consider a 2016 Newsweek headline that announced “Climate change could cause half a million deaths in 2050 due to reduced food availability.” The story described a Lancet study, “Global and regional health effects of future food production under climate change,” [1] that made dire forecasts: by 2050 the effects of climate change on agriculture will shrink the amount of food people eat, especially fruits and vegetables, enough to cause 529,000 deaths each year from malnutrition and related diseases. The report added grim specifics to the familiar picture of a world made hot, hungry, and barren by the coming greenhouse apocalypse.

But buried beneath the gloomy headlines was a curious detail: the study also predicts that in 2050 the world will be better fed than ever before. The “reduced food availability” is only relative to a 2050 baseline when food will be more abundant than now thanks to advances in agricultural productivity that will dwarf the effects of climate change. Those advances on their own will raise per-capita food availability to 3,107 kilocalories per day; climate change could shave that to 3,008 kilocalories, but that’s still substantially higher than the benchmarked 2010 level of 2,817 kilocalories—and for a much larger global population. Per-capita fruit and vegetable consumption, the study estimated, will rise by 6.1 percent and meat consumption by 5.4 percent. The poorest countries will benefit most, with food availability rising 14 percent in Africa and Southeast Asia. Even after subtracting the 529,000 lives theoretically lost to climate change, the study estimates that improved diets will save a net 1,348,000 lives per year in 2050.