Although hydroxychloroquine remains an approved drug and doctors can still prescribe it off-label, the FDA’s and NIH’s positions carry significant weight: state governments and medical boards follow the FDA’s lead in their own recommendations.
A very real question should be asked by real journalists: Did some US media outlets encourage people to NOT take a drug that could have saved their lives- all because they hate Trump? https://t.co/QiSiXwQpmr — Richard Grenell (@RichardGrenell) July 3, 2020
The media went bonkers when President Donald John Trump called a regimen of hydroxychloroquine and azithromycin “a real chance to be one of the biggest game changers in the history of medicine.” For example, the Washington Post reported — without evidence — “Drug promoted by Trump as coronavirus ‘game changer’ increasingly linked to deaths.” Thousands of other news organizations posted similar articles.
Not only did Trump act swiftly and appropriately to address coronavirus, our culture of cleanliness and market-based medicine gives us a big edge over China. Let me preface what I’m about to write by saying that, when I grew up in San Francisco, it’s the absolute truth to say that most of my best friends…
Healthcare in China is bad and China is dirty — Watcher of Weasels
I chose not to argue that Obamacare was going to collapse and be repealed in its entirety, but rather, that Obamacare would not, and could not, be the program that had been promised or intended. It had already failed to deliver on key promises for coverage, affordability and of course, the infamous promise that “if you like your doctor, you can keep your doctor.” It was also dangerously unstable, requiring steady executive intervention just to keep the program from collapsing. I argued that these executive interventions, enthusiastically supported by the law’s proponents, were setting a precedent that would eventually be used against it. Worried that health care was too hostage to the vicissitudes of the markets, Democrats had instead made it the prisoner of politics.
“Essentially they’ve made it so that Republicans can undo two-thirds of this law with a stroke of the presidential pen,” I said at the close of my opening statement. “Obamacare is now beyond rescue. The administration has destroyed their own law in order to save it.” Four years later, we are watching those dominos fall.
Remember how we ended up with the particular version of Obamacare that became law. Democrats had 60 votes in the Senate, and a growing sense that they were on the verge of a second New Deal. They thought they didn’t need Republicans, and they thought they couldn’t get Republicans, so they made little effort to involve Republicans in drafting, beyond offering token concessions to a handful of liberal Republicans who might have made nice bipartisan window-dressing at the signing ceremony. Republicans, predictably, spent a year talking down the bill, and by the time it was nearing passage, a majority of the public opposed it.
Then Massachusetts — Massachusetts! — sent Republican Scott Brown to Ted Kennedy’s old Senate seat, a phenomenon that was widely (and in my view correctly) put down to a desire to block Obamacare. Rather than saying “if we’ve lost Massachusetts, we’ve lost America,” Democrats rushed a draft version of the bill into law through a parliamentary procedure that obviated the need for Brown’s vote.
This draft bill, unsurprisingly, had problems. It also overhauled almost a fifth of the economy. It also had the implacable hostility of the opposition, and a public that was pretty angry at politicians for passing it. By the end of the year, Democrats had lost control of Congress, and with it, any hope of making all the changes they’d fantasized after they passed the bill and found out what was (and wasn’t) in it.
That put Obama in the nasty situation of presiding over a program that couldn’t work as written, and couldn’t be legislatively altered. So he proceeded with the only avenue open to him: dubious executive measures that temporarily shored up the program, but weakened even further the slim foundation of political legitimacy that held it up. And here we are seven years later, watching as one by one, those supports sway or snap.
And thanks in part to the voter revolt that Obamacare triggered, those powers have now been handed over to a president who doesn’t simply take political legitimacy for granted, but seems actively hostile to it. The scramble to pass and sustain Obama’s signature initiative may have badly hurt the cause: to make the health-care system fairer, broader and more efficient.
If Obamacare dies now, in this way, the country will be worse off than if it had never passed. And I’m not just talking about the growing notion among both parties that the idea of elections is to get into power and exercise whatever power you can, by whatever means you can get away with, until voters take your toys away again.
In the worst-case scenario, large swathes of the country will have “bare” individual markets where everyone will be magnificently equal in their inability to purchase insurance. And the memory of Obamacare staggering onward for years, down a trail of broken promises and underwhelming results, will make voters reluctant to trust any politician who suggests that we embark on another such journey.
Is Obamacare beyond rescue? If not, it could certainly use some. And at this point, it’s hard to see who is going to swoop in to save the day.
Let’s be clear for a moment — many disabled workers will be unaffected. Physical disabilities may require accommodations, but many of those workers are already making well above minimum wage. Those who aren’t still have the potential to learn valuable skills that would make them employable in any case.
No, this pretty much impacts only the mentally disabled.
At $15 per hour, employers are going to require more from their workers, especially with it being such a huge jump in wages. They’re going to expect people to take on more tasks than those normally associated with minimum wage work.
“But the Americans With Disabilities Act says … “
Doesn’t matter. If the job requires abilities that mentally disabled workers can’t provide, the ADA doesn’t mean squat. The ADA calls for “reasonable accommodations” for disabled workers, but if the job requires something the worker can’t do and no accommodation is possible, the ADA allows the employer not to hire that person.
Seattle thinks it’s doing wonderful things for all workers by refusing to permit employers to dip below the mandated minimum wage. But now the mentally disabled who only want to work, who only want a sense of independence, are out of luck.
I took the Introduction to Psychology class in college as part of my General Education requirement. While taking it, and for some time afterward, I hung around in the Psych Student Lounge and read some of the material on hand there. One item I found, in the late 1970s, was a brief article on taboo subjects in psychological research. One of those topics was “Race and IQ”. Not much has changed.
In the latest issue of National Review, John McWhorter has a challenging and thought-provoking essay about the topic of race and IQ — specifically, about whether that topic should even be up for discussion in liberal-arts classrooms and in the media, as opposed to in scientific journals. He suggests not, as there is nothing to gain from discussing it.
I read McWhorter’s essay with special interest because I have violated the norm he proposes. I have written about race and IQ on numerous occasions — and for a general audience, as I am not even a specialist myself. See, for example, my 2013 essay in this space about Jason Richwine’s departure from the Heritage Foundation, as well as my RealClear reviews of Nicholas Wade’s A Troublesome Inheritance and Richard J. Herrnstein and Charles Murray’s The Bell Curve (on its 20th anniversary).
In light of McWhorter’s essay, I thought it would be worth explaining how I became interested in this topic and why I participate in public discussions of it. Here goes.
I suppose I can blame this on my wife. Back when we were dating in college — and she was a self-described socialist, and I thought democratizing Iraq sounded like a fantastic idea — she insisted I take a class offered by the sociology department called “Social Inequality.” It would open my mind. I don’t even remember what led up to it, but at one point the professor informed us that some amorphous “they” had proven that “race isn’t genetic.” Murmurs of amazement spread among my classmates. “That sounds like bullsh**,” I thought.
Back at my dorm I turned to Google and quickly sussed out one of the basic truths McWhorter mentions: Categorical claims that “race isn’t genetic” amount to either bad science or word games. One of my most amusing discoveries that day was the argument that when forensic anthropologists identify someone’s race from nothing but a skeleton, all they’re really identifying is the region the person’s ancestors came from, which is totally different. I later learned that, if given a collection of DNA samples, scientists can predict the racial self-identifications of the people the samples come from with nearly 100 percent accuracy.
Are the precise boundaries we draw between racial categories subjective? Of course. But even our casual classifications strongly reflect ancestry, and people with different ancestries have recognizably different genetic profiles. To insist otherwise is ridiculous.
At any rate, one year around 2005 or so I pulled out all the stops. I read The Bell Curve, including all the technical appendices. I read not one but two essay collections responding to The Bell Curve. I read a bunch of other stuff online. And on the underlying scientific issue here, I came to the same conclusion McWhorter does: The evidence doesn’t justify a verdict one way or the other. Genes do differ among racial groups, measured IQs differ on average as well, and some of the genes that differ might affect IQ. There’s no reason this can’t be the case. We just don’t know whether it is yet.
My experience provides a window into (a) how it is that people become interested in this topic and (b) what material is available to those who do. Regarding (a), it’s certainly true that if three different people had taken McWhorter’s advice and simply steered clear of the issue — my sociology professor, Eric Alterman, and my classmate — I might never have become so intrigued.
But I rather doubt that an effort to further stigmatize the discussion of race and IQ could have more than a minor effect on how often people actually discuss it. And even if people did stop discussing it openly, I suspect many would still become curious about the topic and research it online, where people feel considerably freer to explore the taboo. This subject sits at the nexus of numerous others that are inherently interesting, for perfectly legitimate reasons. How did evolution shape humanity as a whole, and to what extent did it shape different populations differently? Why do we have such stark inequality among different groups of people, and not just blacks and whites in the U.S.?
So regarding (b) above, the big question is: When people start hunting around for information online, what do you want them to find? If mainstream outlets decline to cover the subject, all that will be left are what McWhorter calls “dense, obscure academic journals” — and fringe websites whose proprietors don’t feel bound by society’s norms. Do you think the typical Googler is going to wade through the technical pros and cons of the “method of correlated vectors” (a heavily criticized technique suggesting that the best measures of “general intelligence” also have the biggest black–white gaps), or do you think he’ll turn to the more accessible option, especially if it’s at least presented in a reasoned tone?
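For the curious reader, here is a minimal sketch of what that parenthetical technique amounts to, using entirely made-up subtest numbers; it illustrates only the mechanics (correlating subtests’ g-loadings with their group gaps), not any real dataset or conclusion.

```python
# Minimal sketch of the "method of correlated vectors" mentioned above.
# All numbers are hypothetical and for illustration only.
from statistics import correlation  # Python 3.10+

# For each subtest of a battery: how strongly it loads on the general factor ("g"),
# and the standardized group difference observed on that subtest.
g_loadings = [0.45, 0.55, 0.65, 0.75, 0.85]   # hypothetical
group_gaps = [0.30, 0.50, 0.55, 0.70, 0.80]   # hypothetical, in standard deviations

# The method simply correlates the two vectors across subtests; a high positive
# correlation is read (controversially) as evidence that the gap tracks "g".
r = correlation(g_loadings, group_gaps)
print(f"correlation between g-loadings and group gaps: {r:.2f}")
```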
There’s another reason too: Whether we like it or not, scientists are going to answer these questions sooner or later. They are already in the process of figuring out exactly what genes shape our brains and how they differ among individuals and groups; even McWhorter would not stop this research, and it will be carried out in other countries if American scientists keep away from it. I think we should be intellectually prepared for the possibility that this line of work won’t turn out the way we want.
In A Dream Deferred, Shelby Steele wrote that it would have “far-right and, I have to say, even fascistic ideological implications” if genes contributed to the black–white IQ gap. Responding to my above-mentioned piece about Jason Richwine, Will Wilkinson of The Economist wrote that if genetic, group-level IQ differences exist, it forces us to “acknowledge that the racists were right all along — that racism has, to some extent, a valid scientific basis.”
I submit that it’s better to establish why these conclusions are wrong before scientists uncover any bombshells about IQ or other sensitive traits. They are wrong because population-level averages cannot justify discrimination against individuals, and because genetic abilities and propensities — measured at the group or individual level — cannot justify inhumane treatment. After all, we stopped sterilizing low-IQ individuals long ago, despite a wealth of research showing that individual-level differences in IQ are roughly half genetic. The immorality of fascism and racism stems from the moral equality of all human beings; it cannot rest on an assumption that all human beings or groups of them are exactly the same.
If we achieve that, though, what we should aspire to is not a “brutally open, race-based meritocratic consensus” but an end to racial bean-counting. If Americans of all races have the opportunity to achieve what their natural talents make possible, any remaining statistical gaps among races should become a non-issue. In other words, it’s at that point we should stop talking about all this, and I think we very well might.
Thomas Sowell has written a great deal about race and culture, and about the statistical disparities in how different populations are represented in different areas of life. Like it or not, bean counter or not, the fact is that different demographic groups have different levels of interest in different things. Not every race evinces the same proportion of people interested in archery. Certain jobs will attract more of this ethnicity than that ethnicity. These are cultural patterns that prove to be very resistant to change, and that follow populations as they migrate around the planet.
I doubt there’s a genetic reason why Germans have more brewers of beer than other nationalities. In fact, there’s probably no good reason for it at all. It just is.
We’ve made something of a peace with physical differences, but psychological differences are proving a lot harder to swallow.
So, for my friend who claims that the vast bulk of the research shows benefits from increasing the minimum wage, here’s the abstract:
We review the burgeoning literature on the employment effects of minimum wages – in the United States and other countries – that was spurred by the new minimum wage research beginning in the early 1990s. Our review indicates that there is a wide range of existing estimates and, accordingly, a lack of consensus about the overall effects on low-wage employment of an increase in the minimum wage. However, the oft-stated assertion that recent research fails to support the traditional view that the minimum wage reduces the employment of low-wage workers is clearly incorrect. A sizable majority of the studies surveyed in this monograph give a relatively consistent (although not always statistically significant) indication of negative employment effects of minimum wages. In addition, among the papers we view as providing the most credible evidence, almost all point to negative employment effects, both for the United States as well as for many other countries. Two other important conclusions emerge from our review. First, we see very few – if any – studies that provide convincing evidence of positive employment effects of minimum wages, especially from those studies that focus on the broader groups (rather than a narrow industry) for which the competitive model predicts disemployment effects. Second, the studies that focus on the least-skilled groups provide relatively overwhelming evidence of stronger disemployment effects for these groups.
An exploration of three possible definitions of “racism”.
As usual, the answer is that “racism” is a confusing word that serves as a mishmash of unlike concepts. Here are some of the definitions people use for racism:
1. Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.
2. Definition By Belief: A belief that some race has negative qualities or is inferior, especially if this is innate/genetic.
3. Definition By Consequences: Anything whose consequence is harm to minorities or promotion of white supremacy, regardless of whether or not this is intentional.
Part of the problem we have in society is that we can only directly observe the third, and we’re not always very careful about correlation and causation.
Consider some business, let’s say a daycare center, that we know discriminates against black job-seekers. If we ask them why, they say “Because black people are criminal”. This sounds like just about the most typical and obvious example of racism possible.
But there’s actually a lot of really good scholarship on this exact situation, and it helps provide a different perspective. It starts like this – a while ago, criminal justice reformers realized that mass incarceration was hurting minorities’ ability to get jobs. 4% of white men will spend time in prison, compared to more like 16% of Hispanic men and 28% of black men. Many employers demanded to know whether a potential applicant had a criminal history, then refused to consider them if they did. So (thought the reformers) it should be possible to help minorities have equal opportunities by banning employers from asking about past criminal history.
The actual effect was the opposite – the ban “decreased probability of being employed by 5.1% for young, low-skilled black men, and 2.9% for young, low-skilled Hispanic men.”
In retrospect, this makes sense. Daycare companies really want to avoid hiring formerly imprisoned criminals to take care of the kids. If they can ask whether a given applicant has a criminal record, this solves their problem. If not, they’re left to guess. And if they’ve got two otherwise equally qualified applicants, and one is black and the other white, and they know that 28% of black men have been in prison compared to 4% of white men, they’ll shrug and choose the white guy.
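To make that logic concrete, here is a minimal sketch in Python of the decision problem just described. The incarceration rates are the ones quoted above; the applicants, their records, and the screening rule are purely hypothetical, used only to show how banning the criminal-history question pushes an employer back onto group base rates.

```python
# Hypothetical illustration of the statistical-discrimination logic described above.
# Base rates quoted in the text: share of men who will spend time in prison.
BASE_RATES = {"white": 0.04, "hispanic": 0.16, "black": 0.28}

def estimated_risk(applicant, can_ask_about_record):
    """The employer's estimate that an applicant has a criminal record."""
    if can_ask_about_record:
        # When the question is allowed, the individual answer settles it.
        return 1.0 if applicant["has_record"] else 0.0
    # When the question is banned, the employer can only fall back on the group base rate.
    return BASE_RATES[applicant["race"]]

# Two otherwise equally qualified, record-free applicants (hypothetical).
applicants = [
    {"name": "A", "race": "black", "has_record": False},
    {"name": "B", "race": "white", "has_record": False},
]

for can_ask in (True, False):
    risks = {a["name"]: estimated_risk(a, can_ask) for a in applicants}
    hired = min(risks, key=risks.get)  # hire the applicant with the lower estimated risk
    print(f"can ask about record = {can_ask}: risks {risks} -> hire {hired}")

# With the question allowed, both estimates are 0.0 and race is irrelevant (the tie is arbitrary).
# With it banned, applicant A is imputed a 28% risk and B a 4% risk, so the rule picks B,
# which is the disparity the ban-the-box studies measured.
```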
Is this racist? Is this “statistical discrimination”? Describe it with whatever word you want. The point is that they have understandable motives (don’t hire criminals to take care of the kids), accurate beliefs, and in their shoes you might do the same. More important, once you give them the tools they need to solve their problems without racial discrimination – you let them see applicants’ criminal histories – they have no further desire to discriminate and your problem is solved.
If you tried to solve this by sending these people to sensitivity training, you would fail. If you tried to solve this by firing these people, the people who replaced them would have the same incentives, and you would fail again. If instead you recognize that racial animus plays no role at all in this scenario, and that daycare owners just want to do what’s best for their kids, then you can provide them with the tools they need to do that, and solve the racial discrimination at the same time.
One problem with attributing consequential and incentive-driven behavior to racist motives is that it’s a very good way to alienate people who might otherwise be sympathetic to your cause. If I have to choose between being called a racist and incurring serious harm just to demonstrate my virtue, I may well choose being called a name. And once I’m being called a racist, I have no real incentive not to adopt the whole package.
For example, and this is important, I’m seeing the term “alt-right” used the way people use “fascist” — a label meaning “someone I disagree with”. But the people I’ve seen stepping up as spokescritters for the alt-right actually are racist. If people are being saddled with the label because of a failure to toe the leftist party line, there’s a chance they will decide they might as well adopt the alt-right platform, racism and all.
To be clear – I am not saying that racism doesn’t exist, I’m not saying that we should ignore racism, I’m not saying that minorities should never be able to complain about racism. I’m saying that it’s very dangerous to treat “racism” as a causal explanation, that it might not tell you anything useful about the world, and that it’s a crappy lever to use if you want to change behavior.
And I’m not saying that it’s not useful to think of some of these things as places where there’s an opportunity for racial change. If a daycare owner is really interested in redressing racial inequality, they can hire minorities even if it’s against their incentives and self-interest (although it’s unclear why the owner should prefer that opportunity to other opportunities, like donating some of their profits to the NAACP.)
And I’m not saying that there will never be a case that’s impossible to break down into non-racist motives. Heck, I’m not even saying there aren’t some honest-to-goodness murderists out there. But I am saying we should at least try. Not because it’s necessarily costless. Not because there isn’t a risk of false negatives.
We should try because it’s the only alternative to having another civil war.
The result of one early experiment in a citywide $15 minimum wage is an ominous sign for the state’s poorer inland counties as the statewide wage floor creeps toward the mark.
Consider San Francisco, an early adopter of the $15 wage. It’s now experiencing a restaurant die-off, minting jobless hash-slingers, cashiers, busboys, scullery engineers and line cooks as they get pink-slipped in increasing numbers. And the wage there hasn’t yet hit $15.
As the East Bay Times reported in January, at least 60 restaurants around the Bay Area had closed since September alone.
A recent study by Michael Luca at Harvard Business School and Dara Lee Luca at Mathematica Policy Research found that every $1 hike in the minimum wage brings a 14 percent increase in the likelihood of a 3.5-star restaurant on Yelp! closing.
Another telltale is San Diego, where voters approved increasing the city’s minimum wage to $11.50 per hour from $10.50. That came after the minimum wage had already been raised from $8 an hour in 2015, meaning hourly costs have risen 43 percent in two years.
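For anyone checking the arithmetic, here is a quick back-of-the-envelope calculation using only the $8 and $11.50 endpoints cited above.

```python
# Back-of-the-envelope check of the San Diego wage increase cited above:
# from $8.00/hour in 2015 to the $11.50/hour voters approved.
old_wage, new_wage = 8.00, 11.50
increase = (new_wage - old_wage) / old_wage
print(f"Minimum wage rose by {increase:.1%} over roughly two years")  # ~43.8%
```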
The cost increases have pushed San Diego restaurants to the brink, Stephen Zolezzi, president of the Food and Beverage Association of San Diego County, told the San Diego Business Journal. Watch for the next mass die-off there.
File this under “unintended consequences”.
In the past three years, male students accused of sexual misconduct have filed hundreds of lawsuits, charging that they were the victims of both false allegations and school procedures that failed to properly vet the claims.
Jazz Shaw comments:
Suing a woman who was allegedly raped? But there have been simply too many cases dredged up where the charges turned out to be vastly overstated or completely unfounded, combined with instances where there were simply no legal protections in place for the accused. What else can be done?
The truly sad part of this, as in so many instances, is that it’s really not the fault of the woman bringing the allegations. It’s the social justice warrior climate permeating so many schools, filling everyone’s heads with stories of a “campus rape culture” and a distrust of law enforcement and the court system. It’s easy to see why so many would disregard the normal protections and requirements of the legal system and listen to professors or administrators whispering in their ears, telling them that they can simply “handle it at school” so they won’t need to get the cops involved.
This, of course, is a betrayal of not only the victim and the accused, but of all other women in the surrounding community. As has been repeatedly noted, if a rape takes place, these college kangaroo courts can’t do more than issue a reprimand and boot the alleged offender out of school. If he was actually guilty, this basically means that you just turned a rapist loose on the rest of the community with far more time on his hands. Tell me, advocates of such systems… is there nothing worrisome to you about such a scheme?
No woman needs to be “put on trial twice” in these situations if you actually put the accused through a real trial the first time. That means filing police reports, having them gather evidence, interview witnesses and bring charges. And the accused gets to mount a legal defense and have his day in court as well. (I’ll say “his” because it’s nearly always a man.) Yes, it can be uncomfortable for any victim of any crime and I have all the sympathy in the world. But in case it’s any consolation, if a crime actually did take place and the guy is guilty, the judge can lock him up for a very long time and I’ll be there right alongside you cheering for the most severe sentence possible.
Merv Benson at Prairie Pundit notes:
I have noted before how ill-suited colleges and universities are for handling these matters. Many of them routinely deny the accused due process rights, including the right to an attorney and the ability to cross-examine their accuser. What they should be required to do is turn the matter over to local law enforcement, such as a district attorney’s office, to determine whether there is sufficient evidence of a crime.
Now attorneys for the accused are suing their accusers, alleging defamation, which at least gets the matter in front of a real court rather than some campus star-chamber proceeding. Colleges that thought they were protecting accusers now find those same accusers having to pay attorney fees to defend themselves. If the case had been turned over to the DA’s office to begin with, this could have been avoided, and both sides would have had a better chance of getting due process.