August 15, 2017

On the one hand, we have “monument fever,” with an endless array of not just individuals but groups getting monuments in our public places.  Will future generations feel the need to clean up the National Mall in order to memorialize heroes of future civic or military battles?

On the other hand, we’re engaged in a process of clean-up now, getting rid of monuments to people who we conclude weren’t that heroic after all.

On the third hand, when ISIS and the Taliban got rid of monuments that they didn’t like, we got outraged.  Is the difference that our “bad monuments” are not old enough to have historical significance?

On the fourth hand, we cheered when statues of dictators were toppled in the Soviet Union and Iraq, and will likely do so again, if someday the Kims lose their hold on North Korea.

What’s the purpose of a monument? And when does it become a “historic site” which should be preserved whether we like it or not?

It seems like an easy call that monuments to military generals and politicians of the Confederacy should go.  From Robert E. Lee to Jefferson Davis to Stonewall Jackson, they supported a rebellion against the United States whose very purpose was the preservation of slavery, however much it was later idealized as a states’ rights and independence movement.

Of course, this shouldn’t be a matter of private individuals (that is, vandals) taking things into their own hands, nor of the federal government mandating removal; rather, each local government body should feel free, in a democratic manner, to make that decision for itself.

As much as the rallying cry of those upset at this turn of events is that these monuments are a part of our history, a monument placed in a city park is not the same thing as a historical site.  Monuments should serve the people, not be fixed and frozen in time.

But there are three complicating factors.

In the first place, we need some kind of standard that differentiates between anyone with a complicated legacy, and those individuals whose primary “claim to fame” is directly and concretely associated with slavery or other Bad Things.

Washington, after all, was a slaveowner, as was Jefferson.  It’s my understanding that, to a certain degree, they themselves recognized the conflict between their pursuit of liberty and the system of slavery, but were unwilling to break with something so ingrained in their way of life.  It seems like common sense to say that only those men who specifically worked to preserve the slave system lose their places of honor — but students at places like Yale and Princeton have rejected this differentiation as they demand that any building named for anyone connected with slavery be renamed (though bloggers claim that students are hypocritical in not asking for Yale to rename itself — since its namesake was a slave trader — lest they lose the “Yale” cachet).

Second, there are some sites where the “monuments should serve the people” formulation doesn’t work as well, because they have become historical sites in their own right.  Chief among these is Stone Mountain, in Georgia, of which Wikipedia says,

Stone Mountain is a quartz monzonite dome monadnock and the site of Stone Mountain Park in Stone Mountain, Georgia. At its summit, the elevation is 1,686 feet (514 m) MSL and 825 feet (251 m) above the surrounding area. Stone Mountain is well-known not only for its geology, but also for the enormous rock relief on its north face, the largest bas-relief in the world. The carving depicts three Confederate figures during the Civil War: Jefferson Davis, Robert E. Lee and Stonewall Jackson.

The carving was done in fits and starts from 1916 to 1972; the mountain itself was owned by the United Daughters of the Confederacy until 1958, at which time it was deeded to the state of Georgia.

And, in the context of all the calls for the removal of any sort of Confederate monument, there are similar calls now for this carving to be sandblasted.  Given that Teddy Roosevelt is the object of protests vilifying him as a “white supremacist,” a blanket “destroy all monuments of Bad People” dictum would involve dynamiting Mount Rushmore as well.

Finally, we should address the “keep the monuments” claims that these are a part of Southern heritage, and that Southerners want to remember these people as having at least attempted to fight for independence.  They may be flat-out wrong, but is it really appropriate, and pragmatic, to run roughshod over them?  The rallying cry of “white nationalists” is that they want to have “white pride,” and I imagine that their message becomes much more appealing to young people when they’re told that their ancestors were racist, had no redeeming qualities, and that their “heritage” contains nothing of value and should produce only shame and repentance.  Could at least some of those statues be replaced, not universally with Civil Rights icons, but with local men and women known for music and the arts, literature, science, or community service, from any time period, not just the 1960s?


Image:  By KyleAndMelissa22 (Own work) [Public domain], via Wikimedia Commons

July 8, 2017


In the news last week:  “The U.S. fertility rate just hit a historic low. Why some demographers are freaking out.”

The United States is in the midst of what some worry is a baby crisis. The number of women giving birth has been declining for years and just hit a historic low. If the trend continues — and experts disagree on whether it will — the country could face economic and cultural turmoil.

According to provisional 2016 population data released by the Centers for Disease Control and Prevention on Friday, the number of births fell 1 percent from a year earlier, bringing the general fertility rate to 62.0 births per 1,000 women ages 15 to 44. The trend is being driven by a decline in birthrates for teens and 20-somethings. The birthrate for women in their 30s and 40s increased — but not enough to make up for the lower numbers in their younger peers.

It isn’t clear whether the low teen and 20-something birthrates are due to an increase in the use of highly effective contraceptive methods, a decrease in the frequency with which they’re having sex, a more general increase in the motivation to avoid pregnancy, or some combination of these factors.  And, to the best of my understanding, this decrease in the birth rates for these younger groups is a new enough trend that we don’t know whether, in the coming decade, we’ll see that these women merely postponed childbirth, or are, on average, having fewer children over their lifetimes.  (Note that the underlying CDC report gives a TFR of 1.818, the lowest since 1984.)

But demographers confidently expect the fertility rates to rebound, and here’s something curious:  the WHO fertility rate projections likewise assume that, consistently, for countries with below-replacement fertility levels, those rates will increase, e.g.,

Japan’s current TFR is 1.48.  The WHO projects, in their “medium variant,” that it will increase progressively until it reaches 1.79 in 2100.

Korea’s TFR is projected to grow from 1.32 to 1.78.

Germany, from 1.47 to 1.73.

In North America, Canada is projected to move from 1.56 to 1.78.

Other countries which already have TFRs of slightly below 2 are presumed to stay at about that level, e.g., the US is projected to move from 1.89 to 1.92.

And, well, Sub-Saharan Africa as a whole is projected to move from 4.75 to 2.16.

Now, it’s not a surprise to me that demographers project that high-fertility countries will see decreases in fertility, or that countries that have been reasonably stable and replacement-ish will stay that way.  But I’m curious as to the basis for the assumption that the super-low-fertility countries will “naturally” rebound.  Is the expectation that their levels of immigration will increase, and that those immigrants will be more fertile than the native population?  Is there a presumed equivalent to the so-called “Roe effect” — that is, an assumption that the lion’s share of childbearing will be done by larger families, so that an increasing share of children will grow up in larger families and expect to have more than one child themselves?  The WHO site has links to data but no real explanations.

Readers, what do you think?


Image:  Creative Commons license.

July 4, 2017


My son will be applying to college next year, which means that stories of college students protesting America as a vile, unjust place are particularly likely to catch my eye.  And at the same time, in the run-up to Independence Day, Jake Tapper is tweeting out photographs of American soldiers submitted by followers.  (As it happens, my dad is a vet, though his time in the military was spent happily in Germany, in-between Korea and Vietnam; he was commissioned as a second lieutenant after ROTC in college, used his mechanic’s skills to supervise a vehicle repair depot, and, so far as I can tell, spent his off-hours travelling the country at a time when the salary of a young, single American officer stretched pretty far.)

In the meantime, on Sunday at church we sang the usual patriotic songs, America the Beautiful and The Battle Hymn of the Republic.  And, look, maybe I get too emotional, but I got a little weepy at the latter song.

No matter how much historians (and ordinary Americans) argue about whether the Civil War had as its objective freeing the slaves (or, for the South, preserving slavery), or preserving the Union (or, conversely, independence for the South), it is true that for a meaningful portion of the population (and I’m not going to propose percentages), it mattered a great deal that slavery should end.  Here are the lyrics, per Wikipedia.

Mine eyes have seen the glory of the coming of the Lord;
He is trampling out the vintage where the grapes of wrath are stored;
He hath loosed the fateful lightning of His terrible swift sword:
His truth is marching on.

Glory, Glory, hallelujah!
Glory, glory, hallelujah!
Glory, glory, hallelujah!
His truth is marching on.

I have seen Him in the watch-fires of a hundred circling camps,
They have builded Him an altar in the evening dews and damps;
I can read His righteous sentence by the dim and flaring lamps:
His day is marching on.

And so, too, in the World Wars, and even now in Iraq and Afghanistan.  We talk about American soldiers defending our freedom, but it’s really much bigger than that:  the American Dream that we celebrate on Independence Day is only partly about political independence from Great Britain.  The more important piece is the sacrifice that so many have made in our history for, not our freedom, but the freedom of others.

I mean, sure, there’s the “regular” American Dream of becoming prosperous by dint of hard work, in the model of Thomas Edison and Henry Ford.  But you don’t get teary-eyed about Thomas Edison.

And slavery is a huge stain on our country.  But in the same way as, on someone’s birthday, you don’t contemplate all the times they were really kind of a jerk, and at an anniversary, you don’t dwell on the arguments you’ve had, and at a retirement party, you’d be horrified if the boss started piling on complaints, so, too, Independence Day is a day to celebrate those ideals, and the people who have worked to put them into practice.


Image:  own photograph.

June 22, 2017

In the news last week:

A jury found St. Anthony police officer Jeronimo Yanez not guilty Friday in the fatal shooting of Philando Castile, whose livestreamed death during a traffic stop stunned a nation.

And yesterday the dashboard camera video was released.  I haven’t watched it, and don’t plan to — it suffices that writers with varying politics report that it horrifies them.

But then why didn’t the jury convict Yanez?

We have to look further than “white supremacism” or “racism” — it was not a matter of government officials refusing to bring charges, but of a jury that refused to convict, a jury that split 10–2 (with the two black jurors in favor of the not-guilty finding).

Was the jury racist?  Given that the police officer was not white, either, it seems unlikely that they gave him a pass on account of his color.  Did the prosecution throw the case, out of indifference?  I haven’t heard any suggestion to this effect.

So far as I can tell, there are three possible reasons:

Either the jury (as representative citizens) gave Yanez, as a police officer, more benefit of the doubt than they would an ordinary citizen, or

The prosecutors overcharged him, so that his actions, however heinous, did not meet the definition of the crimes he was charged with, or

In yet another instance of “you weren’t there”, the jury, hearing all the testimony and evidence, and not just what makes it onto the news, simply believed Yanez’s defense that he believed his life was at risk.

Or, rather, I think all three of these factors were in play.

To be sure, from everything I understand, however worried he might have been that he’d pulled over a thief (and I haven’t seen any reporting on whether he had grounds for that suspicion), he should have managed the situation better.  Either the protocol on what to do, as a police officer, when engaged in a traffic stop with someone who discloses they have a weapon, is deficient, or he did not follow that protocol properly; it seems to be the latter, but it’s not certain to me.  (How do you ensure that the driver doesn’t reach for a weapon?  Apparently, according to comments, by telling the driver to keep his hands on the steering wheel and not giving him any contradictory commands.)  To be sure, the defense says that Castile contributed to his death by not following instructions correctly, that his being high might have compromised his ability to do so, and that it heightened Yanez’s feeling of threat.  But from what I’ve read, Castile was acting with the intent of cooperating, and Yanez did a poor job of providing instructions, telling him both “don’t reach for the gun” and “please get your driver’s license out.”

What does the law say about a case where you believed your life was at risk, but you really shouldn’t have, because you were being overly panicky?  And what does the law say about shootings where, through mismanagement of the situation, one creates the circumstances in which one believes one’s life is at risk?  I don’t know.  One presumes that the court ensured that the jury was instructed properly about the law, whatever it has to say in these situations.  Did the jury reach beyond their instructions due to sympathy with Yanez?  Again, I don’t know.

So what next?

Is the objective, at this point, still to punish Yanez?  That goes against the double jeopardy principle of our criminal justice system, and is itself, then, unjust.  (Yes, in the Rodney King case, the police officers were tried by the feds on “civil rights” charges after they were acquitted by a jury, but I don’t think there are grounds for that here; no one is suggesting that Yanez had any anti-black animus, just that he was inappropriately panicky and/or failed to manage the situation properly.)

After all, a jury assessed whether Yanez’s actions merited the punishment called for by the charges at hand.  They were not asked to identify what was “fair” or what would make everything right, or to, in some way, make up for every other victim by their decision here.  They were asked: does this man belong in prison for what he did, and for the length of time that a “guilty” determination would produce?

Remember back when I wrote about Korea?  One of the examples that the author provides about the differences between American and Korean culture is that there was a great deal of uproar over the unintentional death of Korean civilians due to an accident in which a U.S. military vehicle was on a public road.  The people recognized that the driver was not at fault and did not personally bear guilt, but still wanted the soldier(s) to be tried (even if ultimately freed from prison) to assuage the anger and grief that the people felt over these deaths.  Their sense of justice meant that the soldier’s imprisonment would serve to make things right.  Our belief in justice says that people should only be imprisoned as appropriate punishment for criminal acts that they have been found guilty of by a jury assessing the facts of the case.

And, while Yanez shouldn’t be given a “pass” because he’s a police officer, neither should he be held to a higher standard, in a criminal court at least.

Is the objective, then, to punish future Yanezes?  That would be a start, I suppose, by having some sort of review process to ensure that (a) the prosecutor does not overreach for political reasons (which it seems to me has happened in other recent cases — perhaps readers can remind me of them), and that (b) the jury properly understands all applicable law and is well-instructed to ensure that they hold police officers to the proper legal standards.

But if the objective is to prevent future deaths, then that’s a different story.  It involves better training, and more time spent on training (which means more money, because you’re hiring more police to make up for less time spent on the beat), and it involves better screening, to ensure that police officers are able, in terms of their psychological make-up, to make it through an encounter such as this  without shooting.  This is a much more challenging task.

And — final thought:  what we don’t know, or at any rate what I don’t know, is how rare or common a Philando Castile incident is — by which I mean, relative to this one case, how many other traffic stops proceed smoothly, with the driver disclosing “I have a gun,” and the officer managing the situation in such a way as to (a) not shoot and (b) not create the circumstances in which he felt he needed to shoot.  Having some successful counter-examples would help us understand how to ensure these traffic stops work as they’re supposed to.


Image:  By Tony Webster from Minneapolis, Minnesota (Philando Castile – Falcon Heights Police Shooting) [CC BY-SA 2.0], via Wikimedia Commons
Minnesota Bureau of Criminal Apprehension (BCA) investigators process the scene where a St. Anthony police officer shot and killed 32-year-old Philando Castile in a car near Larpenteur Avenue and Fry Street in Falcon Heights, Minnesota, on July 6, 2016.
Photo: Tony Webster.

June 3, 2017


What follows is, ahem, a load of covfefe, but it’s something I was thinking about subsequent to my post yesterday about American culture and international agreements.

If you think about “diversity” and “multiculturalism” and reports of the same at schools and universities, you probably land on things such as:

  • Celebrations of the traditional music, dance, and food of a given non-American society or an American subculture,
  • Acceptance/tolerance or welcoming/celebration of people with different skin color, facial features, and accents (whether non-native speakers or speakers of so-called “ebonics”), different styles of dress, or different religious practices, including religious clothing, food restrictions, and the like, relative to the white, European-origin people of the American suburbs, and
  • Elimination of practices that non-mainstream-cultured people find upsetting, such as asking “where are you from?” of an individual with Asian ethnic origin but a plainly American accent, or expecting a black student to be able to speak for all students of her race/ethnicity on a topic (or, conversely, such a student claiming to be able to do so).

But deeper down, we all, as Americans, participate in an American culture.  And here I have pulled Riding the Waves of Culture, by Fons Trompenaars and Charles Hampden-Turner, off the bookshelf.  This was a book given to us as reading material when we were preparing for our expat assignment in Germany, and it was really eye-opening in showing that “culture” is a lot more than just traditional clothing, dances, and food.

I’d already experienced some of this even just with my in-laws:  the fact that following the Rules of Polite Behavior was very important to them — that is, greeting everyone with handshakes when you walk into a room.  But Trompenaars identifies (and connects to international business, though the lessons are broader than that) five dimensions of “how we relate to other people”:

  1. Universalism vs. particularism, or rules vs. relationships
  2. Communitarianism vs. individualism (the group vs. the individual)
  3. Neutral vs. emotional (the range of feelings expressed, e.g., in business dealings)
  4. Diffuse vs. specific (the range of involvement, that is, how “personal” the business relationship is)
  5. Achievement vs. ascription (how status is accorded, that is, your own record vs. your credentials, connections, etc.)

He also identifies additional cultural differences in how we view time — not just the bit about some cultures being “chronically late” but whether one thinks primarily in terms of everything going in its order vs. thinking in a “polychronic” way:  “there is a final, established goal but numerous and possibly interchangeable stepping stones to reach it”.  In addition, cultures differ in how they think of the past, and the short vs. long-term future.

And cultures also vary in terms of how we relate to nature/the natural world and how we perceive our ability to control the world around us, vs. fate controlling us.

This is probably not something unique to Trompenaars; it’s a book I have on my bookshelf but I presume there are others like it.  I find it fascinating, though, that things that we, as Americans, think of as “perfectly normal” simply operate quite differently in other cultures.  For instance, he describes different cultural norms with respect to contracts:  Americans want to develop extensive lawyer-approved contracts that specify every contingency, and expect that, having done so, the contract will be abided by in all circumstances; a country such as Japan will expect that one builds up a personal relationship with one’s business partners, and then, having established trust, the particulars of the contract are less important because one assumes that one’s partner will do the “right thing.”

Now perhaps you’ll say:  “stop right there.  ‘Multiculturalism’ means exactly accepting that we do not all share the same culture, even in this deeper sense.”

But if that’s the case, we’re in serious trouble.

The well-being of our society depends on our ability to integrate newcomers into our culture, in this deeper sense, even if superficial things like community celebrations differ.  The well-being of any society depends on this ability to function with the same set of common expectations about the way we relate to each other.  To take some superficial examples:  imagine a deli counter in which half the people dutifully take their numbers and wait their turn, and the other half barge up and expect that the loudest get served first.  (Side comment: as much as Germans are known for orderliness, I tried to avoid the deli counter if at all possible, going to stores that had packaged lunchmeat instead, because they did not have a take-a-number system and it seemed that it was always the turn of whoever made themselves heard.)  Imagine a meeting where half the people show up on time, the other half an hour late.  Imagine a corporate culture in which half the people expect that it’s perfectly normal to hire and promote based on nepotism and cronyism and the other half expect to get hired and promoted based on achievement (yes, there are instances of nepotism/cronyism but that’s considered objectionable, not “business as usual,” by the large majority).

Consider the protests that flare up periodically at universities — where it seems to me that what’s going on is a clash of cultures at a much deeper level, in which universities have evolved into a subculture (perhaps influenced by non-American cultures?) that is at odds with mainstream American culture.  If you look at Trompenaars’ list above, you can identify a clash in many of these aspects of culture:  protesters reject the idea that individual achievement matters, perceiving the world instead as having given advantage to white men, and jostle to take that advantage for themselves; they see things through a lens of group identity rather than individualism; they “take everything personally”; they reject the notion that the system operates on notions of “fair play” and “the rules are the same for everyone”; and they look backwards, to an unjust past, rather than being generally forward-looking.

Now, in most of these contrasts, my instinctive reaction is “of course, the American way is the best.”  Your own personal ability should matter more than who your father was or what school you attended, for instance — though I can also see that the American tendency to want everything in black-and-white and spelled out in The Rules has its own problems.  And I presume that, were I Japanese, or Korean, or whatever, my perception would be quite different.

But my point is that, if we disrupt our culture in this deeper sense, if we perpetually have to manage these sorts of conflicts, that aren’t really manageable within a culture, then we’re in a covfefe-load of trouble.


Photo:  own image.

June 2, 2017

“I hope this e-mail finds you well.”

How many of your e-mails start that way?  Probably not many, unless, like me, you work with colleagues across the globe, in which case, quite a few.  It’s the e-mail equivalent of a cultural imperative that all interactions, and, in particular, business dealings, must start with extended inquiries into the health and well-being of each party and their families.  And it’s just one of a number of ways in which our American culture differs from many others, differences that get lost in the emphasis on the notion of multiculturalism — that is, the idea that we are a nation of many cultures makes us blind to the fact that there is, deeper down, an American culture.

And it seems to me that the reason why the United States has rejected the Paris Accord on climate change is a cultural one.  Oh, sure, you might say that the rejection is because Trump is evil and wants to destroy the planet in the name of short-term profits for American businesses, but, when Obama signed the agreement in 2016, he did not submit it to the Senate for ratification.  To be sure, the Senate was GOP-controlled, so doing so wouldn’t have had any practical effect, but neither did Obama make any push for a binding agreement during the years when the Democrats controlled the House, the Senate, and the presidency.

Beyond which, it’s my understanding that the agreement itself is fundamentally a declaration of “good intentions.”  As described by The American Interest and Commentary magazine, the targets for reductions are set by countries themselves, and can be changed at any time, without any sort of external enforcement mechanism.  Second, the Green Climate Fund does not have (so far as I can tell) any power to compel any country or other entity to hand over the cash that’s supposed to go to poor countries (who, by the way, have nothing to lose in signing the agreement, since they are intended to be recipients of the funds, not contributors).

And, indeed, the United States is nearly alone, in the company only of Syria and Nicaragua, in rejecting the agreement.

But that’s not the only such case.

The United States is the only country to have not ratified the UN Convention on the Rights of the Child, which Clinton signed in 1995.

The Convention on the Elimination of Discrimination Against Women (CEDAW)?  Only the Holy See, Iran, Somalia, Sudan, and Tonga have not signed it; the U.S. and Palau are the only ones who have signed but not ratified it (the U.S. under Carter, in July of 1980; see Wikipedia).

The Convention on the Rights of Persons with Disabilities?  Here’s that list:  the US, Libya, Uzbekistan, Kyrgyzstan, Tajikistan, Chad, Cameroon, Botswana, the Solomon Islands, and Fiji have signed but not ratified; South Sudan, Eritrea, Somalia, and Equatorial Guinea have not signed.  (Per the UN site.)

Why has the U.S. not ratified these agreements?

It’s easy to find individual explanations:  the Convention on the Rights of the Child takes away parental rights, might prohibit homeschooling, etc.  CEDAW could prohibit even the moderate abortion restrictions we have in the U.S., eliminate single-sex schools, would require quotas on the hiring of women, and so on.

But consider that Saudi Arabia, despite its notorious “guardian” system, has ratified CEDAW.  North Korea, which imprisons whole families, has ratified the Convention on the Rights of the Child.  And if I weren’t already running out of time to start my workday, I’d find a really good example of a country which does really dastardly things to the disabled.

And beyond these specific examples, every other country that has signed these conventions has taken the view that, to whatever extent they disagree with any aspect of their application, they’ll go their own way, and it’s no big deal.  (With respect to the Paris Accord, I would guess that the other signatories had a similar view:  we’ll make the decisions that we think are best for our people, and try to get as much benefit out of the deal as possible.)  What, then, is the point of signing these conventions?  To some degree, it’s a matter of taking an action to affirm their resolve to Do the Right Thing; in other cases, it’s purely about asserting on the world stage that you’re just as good and righteous as everyone else, no matter the truth at home.  To some degree, there is also the hope that any given nation that signs and ratifies will then have greater moral standing, and a specific platform, from which to effect change in nations that are deficient in their treatment of children, women, and the disabled.

Why didn’t the U.S. ratify them?  For the same reason that we don’t ask whether every member of the family is doing well before getting down to business.  We focus on the practical, the concrete, and don’t see the same value in the aspirational and symbolic (or, as critics would say, the farcical claims of success) — we ask: how, concretely, will this ratification affect the people at home?  Consider, too, the fact that in certain cultures (I’ve read this of various Asian countries), Americans have trouble doing business because, culturally, their counterparts just will not say “no,” leading the Americans to think they’ve made the deal, when, in reality, our American culture of a straightforward “yes” and “no” is not a part of how they do business.  We want an Agreement, a Treaty, to have a clear path, a practical impact, and enforceable consequences, when much of the rest of the world just doesn’t care.

(It reminds me a bit of two views of marriage:  the U.S. says the equivalent of “marriage is a serious binding commitment”; the rest of the world says, “marriage is a piece of paper that’ll show everyone else how much we love each other, make everyone like us more, and maybe get us some benefits.”)

And I’m not saying American culture is right or wrong on this point.  But it’s different.  And this has nothing to do with climate change.



May 5, 2017

A fist-fight between Lord Brougham and Lord Melbourne as Peachum and Lockit.  Coloured lithograph by H.B. (John Doyle), 1837.  Credit: Wellcome Library, London (Wellcome Images).  Copyrighted work available under Creative Commons Attribution licence CC BY 4.0.

I am outraged, outraged, I tell you, at the fact that Donald Trump seemed to think that Andrew Jackson was alive at the time of the Civil War.  Also, Sean Spicer denied the Holocaust, and you can tell, just by looking Steve Bannon in the eyes, that he wants to kill all the babies in the world.

Also, the Pope is a bad, bad man because he’s a Marxist who wants to change church teaching on marriage and contraception, and probably, late at night, watches Benny Hinn.

But that’s OK because Christians are all fools since they think that if they say a prayer to their magical sky god, he will send dollar bills raining down on them, and they’re also evil because they want to imprison everyone who doesn’t agree with them.  If you’re not careful, they’ll destroy the Capitol and create a theocracy where minorities are sent to toxic wastelands.

But it’s a good thing that Trump won the election, since otherwise Clinton would have turned the United States into Venezuela and we’d be standing in breadlines all day long.

You’re all sharing this post with all your friends, right?

No?  OK, maybe I have to find an anecdote or two that I can twist into something much more nefarious than it is.

How ’bout the fact that the upcoming Murder on the Orient Express film doesn’t contain any Asian (“Oriental”) actors?

That’s what film critic Rebecca Theodore tweeted about on Wednesday — see the report at Ace of Spades HQ.  Upon being reminded that the story does not take place anywhere near the Far East, she then demanded that the filmmakers should have rewritten Agatha Christie’s story to replace Germans and Brits with Asians of some sort or another.  Which misses the fact that filmmaking in English-speaking countries, for period pieces anyway, tends to draw from the literature of those countries, which tends to have white characters — and that there are vibrant film industries in such places as Korea, China, the Philippines, and Japan, whose period pieces draw from their own histories and whose actors are, yes, of “Asian” ancestry.  (See my old post observing that a CNN slideshow purporting to be a list of “Asian-American” actors included actual Asian-Asians who happened to be in American films.)

Or how ’bout the fact that menstrual period-tracker apps — gasp! — tend to be flavored with pink!

Yes, that’s from a recent Atlantic article whose author must have been up against a deadline, and came up with “The Awful Pinkness of Period Apps.”

Now, there are plenty of reasons to complain about the proliferation of “period tracking” apps, and the author reasonably notes that “the apps’ predictions of upcoming menstrual cycles might not account for variations due to menopause or stress” — which is true enough but also fairly trivial, and means that unless your cycle is regular (or artificially regularized by contraceptive pills), such an app, without observing ovulation signs as well, is pretty useless for the purpose of predicting your period.  But that’s a side comment.  What really matters to the author, citing a study, is that “Aside from being aesthetically offensive, the design reflects a certain thoughtlessness—as if slapping some pink flowers on an app is what you do to appeal to women.”

As it happens, one of the commenters, bdavi52, looked at the original study in more detail, and wrote,

Oh, please.
10-20-30 apps (most of them free) to provide ‘period tracking assistance’…. millions of users, spread across hundreds of countries (Clue, one of the more popular, reports 2M users in 180 countries) … and this ‘study’ examines 2000 on-line reviews, surveys 690 self-selected respondents, and interviews 12.

And then tells us that 59 people “considered femininity (in design) a negative trait”.

Are we supposed to take this seriously?

Those are silly examples, but how ’bout one with more serious consequences?

Rebecca Tuvel, an assistant professor of philosophy at Rhodes College in Memphis, published an article on similarities between transgenderism and “transracialism” in the Spring 2017 edition of Hypatia, a feminist philosophy journal.  The article examined persons who believe themselves to be transgender, that is, who identify as being of the opposite sex, and people (or a person, that is, Rachel Dolezal) who identify as being of a different racial origin, wish to live in that other culture, and wish to be recognized as being of that other racial origin.  It looked at how both groups perceive themselves and how others perceive them, and, ultimately, how the philosophical arguments for accepting one claim or the other line up against each other.

In an article, Jesse Singal details the “witch-hunt” against Tuvel over her article.

The journal has already apologized for the article, despite the fact that it was approved through its normal editorial process, and Tuvel’s peers are busily wrecking her reputation by sharing all sorts of false claims about the article that don’t bear the scrutiny of even a single close read.

The biggest vehicle of misinformation about Tuvel’s articles comes from the “open letter to Hypatia” that has done a great deal to help spark the controversy. That letter has racked up hundreds of signatories within the academic community — the top names listed are Elise Springer of Wesleyan University, Alexis Shotwell of Carleton University (who is listed as the point of contact), Dilek Huseyinzadegan of Emory University, Lori Gruen of Wesleyan, and Shannon Winnubst of Ohio State University. (Update: As of the morning of May 3, all the names had been removed from the letter. A note at the top of it reads “We have now closed signatories for this letter in order to send it to the Editor and Associate Editors of Hypatia.”)

The complaints of the letter?  Such horrors as having used the label “transgenderism” (apparently now on the banned list; who knew?), “deadnaming” a person — that is, she referred to the fact that Caitlyn Jenner once had, and gained fame under, the name Bruce Jenner — and other complaints that stretch credulity even further; according to Singal, these complaints are not supported by any reasonable reading of the article in question.  Presumably the real complaint was that Tuvel dared, even for purposes of scholarly analysis, to consider ideas that are uncomfortable or taboo to the political orthodoxy on campus.

Note that Tuvel is an assistant professor, that is, she does not have tenure.  For all that universities proclaim themselves places of free inquiry, one suspects that her shot at tenure is over as well.

So the bottom line is that I, too, need to find just the right scandalous item to share with all of you, find the right witch to try, and I’ll be on my way to fame and fortune.


Image:  A fist-fight between Lord Brougham and Lord Melbourne as Peachum and Lockit.  Coloured lithograph by H.B. (John Doyle), 1837.  Published 22 October 1836.
Credit:  Wellcome Library, London, Wellcome Images (V0050236).
Copyrighted work available under Creative Commons Attribution only licence CC BY 4.0

February 14, 2017


This is a movie, not a book.  Here’s the premise:  a Russian filmmaker, Vitaly Mansky, is invited by the North Koreans to film a movie there, a “day in the life” that purports to show the experiences of a typical family as their 8-year-old girl joins the Children’s Union (the Young Pioneers equivalent).  The Norks dictated the script and checked the footage to ensure that only the acceptable bits were retained, but the crew filmed before and after the “official” scenes and, taking advantage of the Norks’ unfamiliarity with the technology, secretly kept copies of this footage and produced an eye-opening film.

The film does not show the material deprivation of the North Koreans, at least not to a great degree; of course he was constantly supervised by his minders, and even if the filmmaker had more discretion, only the privileged live in Pyongyang in the first place.  But it does show their spiritual deprivation, through countless details, big and small.

Here’s a scene:

Zin-mi comes into the classroom, says to her friend, “you were the first one, again!” and then they happily start washing desks and singing a song of praise to Glorious Leader.

The next scene, in the classroom:  the teacher instructs the children in the love that Glorious Leader had for his people, by fighting the Japanese even as a child.  The children repeat the lesson, over and over again, beaming with pride.

The scene beforehand:  those same children, all hunched over the radiator before class starts, warming themselves.

The girls all have identical braids, and the classroom is all girls.  Is this truly how North Korean schools are set up?  Or did the filming supervisors think that a classroom of all girls would be cuter?  In other scenes, children march around the (grey, concrete) schoolyard.  A caption says that the filmmakers never actually saw children leave the grounds, and came to believe that the children lived there, with their parents living at their workplaces.

Another scene:

a family meal.  The table is impossibly full of food, and mother, father, and daughter sit down to eat.  Father says, “Zin-mi, eat some kimchi.”  Daughter says, “Oh, yes.  Kimchi is very healthy for you.  It prevents aging, and cancer.”  They repeat the scene multiple times.  One wonders whether they ever got to eat the food on the table.

Another scene:  the garment factory where the father supposedly works.  His actual job was as a “print journalist”, but they decided to make him an engineer.  They show a meeting with several of the workers in which he pretends to advise them on a quality-control problem, and the workers nod their heads in unison and then announce their gratitude for his solution (a few garbled nonsense words, something along the lines of “we will add 6, 8 and 20 mm widgets to solve the problem”).  Later, on the factory floor, the supervisor announces her joy at the fact that they reached 150% of the day’s production quota, then gives a bouquet to their oldest and longest-serving employee, who in turn announces that they could not have done it without the wisdom of their engineer.  This scene is repeated multiple times, with the North Korean crew exhorting the women to smile and be enthusiastic, and with the script changing from 150% to 200%.  In the meantime, the camera wanders and we see the real production board with markers at 30 – 40%.

The mother, too, is pulled away from her real job as a cafeteria worker to a staged job in a soy milk factory, where she and the other workers are told by their supervisor how inspiring it is to produce many different kinds of soy milk for all of the people of North Korea — a scene, again, repeated multiple times, with exhortations for the workers to be more cheerful.

Later, Zin-mi gets a dance lesson, with the instructor having her repeat the steps over and over again until she looks visibly exhausted.

Another scene at the school:  the children sit in a small auditorium, listening to a veteran telling tales of his military service against the cowardly Americans.  Zin-mi struggles to stay awake.

There is one of those mass-performance dances in a public square, that we see Zin-mi watching from her apartment.  After the dance is over, the dancers are dismissed and leave for home.

We see people riding the bus, the subway, walking to their jobs, bowing as they pass the mural of the Kims — all of which happens in silence.  Do the people always stay silent when in public, or were they instructed to do so by the filmmaker’s North Korean supervisors?

Praises of Dear Leader, and Great Leader, and whatever other titles, are everywhere, as are songs in their honor — and everyone, man, woman, and child, wears a Kim pin.  We catch bits of TV in the background, and it’s always war movies.

Is the film a shining example of filmmaking?  It’s rather slow in parts, and lingers too long on some tedious scenes, such as the filmmaker walking up flights of stairs (was this supposed to be symbolic of something?), or one scene after the next of pedestrians silently walking through the grey city.

But what got me is this:  for the North Korean scriptwriters to have plotted out these scenes for a Western audience suggests that they truly thought they would be believable — that children would spontaneously sing songs of praise for the Kims, or that factory supervisors would speak in patriotic terms of their joy at serving the people.  It’s such a thorough indoctrination that George Orwell could have been describing the country.  And I’ve read elsewhere that when North Koreans end up in the South — that is, via the underground railroad through China — even though they can intellectually grasp that they’ve been lied to their whole lives, they still can’t shake the emotional conviction that, surely, the Kims themselves are not at fault.


Image:  Pyongyang; By Joseph Ferris III (Flickr: Pyongyang, North Korea) [CC BY 2.0], via Wikimedia Commons


December 6, 2016

“How to Make Abortion Rarer” — that’s the title of an article in this week’s Economist.  And — no surprise — their solution is for women everywhere to “see the light” and start using hormonal contraception rather than condoms, and for governments to make these drugs and devices more widely available, and free of charge.

Before they get to that, they claim that legal restrictions on abortion have no effect, based on a Lancet study that claimed to have determined that “abortion is as common in countries where it is illegal or allowed only to save a woman’s life as it is in those where it is provided on demand.”  (This study is paywalled; here’s a write-up from the BBC and another from the Christian Post; near as I can tell, from memory as well as these articles, it is far too simplistic to say, “abortion restrictions have no effect,” because there’s so much else going on with poverty levels, culture, etc., that you can’t simply classify every country in the world into two groups and draw conclusions from it.)  They point to one country in particular, South Korea, where abortion is nominally illegal but widely available.

The article then cites various countries in which there is a strong cultural dislike of hormonal contraception.  The Greeks, reportedly,

commonly believe that the pill and other hormonal contraceptives cause infertility and cancer. They also distrust intrauterine devices (IUDs), possibly because they have been taught that tampons are unhealthy.


In South Korea fewer than 5% of women use the pill because they think it is harmful to ingest artificial hormones.


In plenty of countries, including Ireland and Russia, more than a third of married couples who use contraceptives rely on condoms. In Japan that share is an astonishing 90%.

despite the fact that condoms have an 18% failure rate.  (That is, 18% of users will fall pregnant within a year of typical use — slightly more pessimistic than the WHO figure of 15%.)

So step one in the Economist plan is to persuade women to overcome their cultural resistance to hormonal contraception.

Step two is to make the contraception free of charge.  They write:

A Russian woman who wants an IUD has to pay for the device and for the appointment to have it inserted. Abortions in state hospitals, by contrast, are free. Half of the 16 European countries in the IPPF review provide no reimbursement for contraceptives; none fully covers all methods. Even in some countries with generous social welfare systems, including Germany and Italy, women have to pay for contraceptives, no matter how low their income.

This is despite contraception being cheap for governments to provide. England’s National Health Service (NHS) offers every type of contraceptive free to everyone (better-off people must pay part of the cost for other prescriptions). Its buying power means it can pay less than £10 ($12.50) for a year’s supply of the pill and just £18 for an IUD that prevents pregnancy for five years. England has one of the world’s highest rates of contraceptive use.

But here’s where their argument fails:  they include a graphic with abortion rates in selected countries.  There are no numbers, so you just have to estimate from the graph’s legend, but the abortion rate for the U.K. looks to be about 16 per 1,000 women of childbearing age.  The U.S. is slightly higher on this graph, maybe 17.  And Germany, which the Economist reports requires women to pay out-of-pocket?  Its rate is about 8.


Image:  By ParentingPatch (Own work) [CC BY-SA 3.0], via Wikimedia Commons

October 3, 2016

Maybe not.

That is, according to a USA Today article, the Census Bureau is proposing adding a new racial category to the census, to cover Middle Eastern & North African-origin people, who up to now had been defined as “white.”  According to the article,

Under the proposal, the new Middle East and North African designation — or MENA, as it’s called by population scholars — is broader in concept than Arab (an ethnicity) or Muslim (a religion). It would include anyone from a region of the world stretching from Morocco to Iran, and including Syrian and Coptic Christians, Israeli Jews and other religious minorities.

But the Census Bureau, which has been quietly studying the issue for two years, also has gotten caught up in debates about some groups — such as Turkish, Sudanese and Somali Americans — who aren’t included in that category. Those are issues the White House is trying to resolve before adding the box on 2020 census forms.

But, of course, if you think about it, it’s rather odd that Turks wouldn’t be included if Syrians or Iranians, say, are (the Turks are hardly European!), and the Turks and Armenians are neighbors, after all — Armenia may be its own country now, but the Armenians killed in the Genocide, and those who fled in its aftermath, were an ethnic minority in Turkey.  Hence, it doesn’t seem a stretch to say that Kim Kardashian, of Armenian ancestry, could be declared non-white.

(And what about the Kurds?  And Jews now living in Israel who had immigrated from Europe — but whose ancestors, millennia ago, left the Middle East?  Or Jews now living in Europe, but, of course, with ancestors from the Middle East?)

And why are they making this change?  For a variety of political reasons — in order to count Middle Easterners and Arabs, to determine whether a congressional district needs to be gerrymandered on their behalf, for example, or to determine if they’re being discriminated against, or measure their health outcomes or income relative to other groups.  And for a feel-good reason, because many such individuals don’t like being labelled as “white.”

The impact?  If nothing else, all those predictions about whites becoming a minority — that is, less than 50% of the population — come true a lot sooner.  But this reclassification is likewise an indicator of the ultimate meaninglessness of these categories, and the fact that they are all about politics, not anything that has any real “truth” behind it.

But what’s peculiar about this article in particular is that this is actually old news — or, at least, Pew reported that the Census Bureau started the process of making this change over a year ago.  And what’s also interesting is that they are, according to this older article, no longer using the word “race” at all, just asking, “Which categories describe [the respondent]?”  Those categories are:

  • White
  • Hispanic, Latino, or Spanish origin
  • Black or African American
  • Asian
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or other Pacific Islander
  • Some other race, ethnicity, or origin

Which doesn’t remedy my old gripe that “Hispanic” really ought to be renamed, and ought to be a matter of whether your origin is (primarily) that of the indigenous peoples of Latin America, not a question about the language of your country of origin.  The new question does seem to have eliminated the requirement, to be “American Indian,” that one be registered with a tribe, since the sample language in the Pew article suggests that Mayan or Aztec are possible “American Indian” categories, so that a Guatemalan, say, who identifies primarily as indigenous and not as a Spanish speaker would perhaps be expected to check “American Indian.”  (Though it’s hard to believe someone would do so.)  But what about the typical immigrant from a Mexican village, of mostly indigenous ancestry?

Anyway, it’s all very silly.

Yes, there are historic uses of the concept of “race” in the United States:  “black” meaning someone with dark skin, from sub-Saharan Africa; “Oriental” referring to Koreans, Chinese, Japanese, and others with a characteristic “Oriental” appearance.

And the Census Bureau has constantly tinkered with its labels — see this 2012 Slate article, which describes old categories such as “octoroon”, and incidentally observes that the “Hispanic” category can be selected by those with a strong “Hispanic” identity as well as those who feel no attachment at all but are just trying to follow the instructions.

But it does make the whole concept of the United States becoming “majority minority” pretty irrelevant, if that’s the case only because the Definers of who counts as a “minority” determinedly preserve and expand that category when assimilation and intermarriage would have otherwise moved people steadily into the “white” label.

My preference?  Re-write the question as follows:

From what major geographical region do you or your ancestors originate?  If your ancestors have themselves migrated from one region to another in the recent past*, use their original region.  If your ancestors have come from multiple regions, use the predominant one.

(*By which I mean Argentines whose ancestors hailed from Germany, or South Africans with Dutch ancestry, or Kenyan nationals with Indian ancestry, for example; you might have a better way of phrasing this than “recent past.”)

And then you can provide choices such as Europe, North Africa, Subsaharan Africa, Middle East, Asia, North America, Central/South America, and the Pacific Islands.  Heck, you can follow this up with a second box for people to mark their “secondary ancestry” in the case of biracial folks.

That gives us meaningful information, without keeping the Census Bureau in the business of defining and perpetuating concepts of “race.”


Image:  Eva Rinaldi [CC BY-SA 2.0], via Wikimedia Commons
