MISLEADING MATH: I noted below how difficult it is to use statistics responsibly. It’s even more difficult to use statistics in a way that appears responsible to people with sharply differing views on the issue under examination–and that’s not just because everyone likes to discredit facts that undercut his beliefs. It’s a problem inherent in the nature of social-science statistics.
The basic problem is that all social-science research requires, before it can even get started, a series of assumptions. For example, soc-sci studies often control for a slew of variables; and choosing which variables to control for requires you to make choices and hypotheses that others may not agree with. If someone disagrees with your assumptions, she’s likely to disagree with your use of statistics. Sometimes, you can agree with her that your initial assumptions were sloppy–but even that agreement must be based, typically, on other assumptions that you share with her.
Some examples should help clarify matters: Some studies (on, say, the effects of domestic violence on children; or the rate of incorrect use of condoms; or whatever) lump married couples and cohabiting couples together. This rests on the assumption that marriage and cohabitation are similar enough that the statistics-gatherers don’t need to bother separating the two. But, as should surprise no one who thinks about it for a few minutes, married couples and cohabiting couples differ in all kinds of ways, from how much of their incomes they merge and how they divide household tasks to how often they have sex to how likely the man is to abuse the woman’s child. So stats that don’t separate the two groups may be making misleading generalizations. (For example, a study might conclude that women living with men are more likely to be injured than women living alone, when the actual breakdown is that women living alone are less likely to be injured than women cohabiting with men but more likely to be injured than married women. Sorry, that’s inherently a somewhat sloppy example; I don’t want to go do real research because a) my books are at the apartment and b) this is a blog and I’m lazy. I hope you see the problem I’m trying to illustrate, though.)
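To make the lumping-together problem concrete, here’s a toy sketch in Python. Every number in it is made up on the spot (the group sizes, the injury counts, all of it); the only point is to show how pooling married and cohabiting women into one “living with men” category can produce a comparison that hides what’s actually going on underneath.

```python
# Hypothetical counts -- not from any actual study.
# Each entry: (number of women, number injured in a given year)
groups = {
    "married":    (600, 6),    # 1% injury rate
    "cohabiting": (400, 20),   # 5% injury rate
    "alone":      (500, 10),   # 2% injury rate
}

def rate(women, injured):
    return injured / women

# Each group considered on its own:
for name, (women, injured) in groups.items():
    print(f"{name:10s} {rate(women, injured):.1%}")

# Now lump married and cohabiting together, the way some studies do:
pooled_women = groups["married"][0] + groups["cohabiting"][0]
pooled_injured = groups["married"][1] + groups["cohabiting"][1]
print(f"living with men (pooled): {rate(pooled_women, pooled_injured):.1%}")
print(f"living alone:             {rate(*groups['alone']):.1%}")
```

With these made-up numbers, the pooled study would report that women living with men are more likely to be injured than women living alone (2.6% vs. 2.0%), even though married women are actually the safest group and the elevated risk belongs entirely to the cohabiting households. Shift the mix of married and cohabiting women and the pooled comparison can flip the other way, which is exactly why the lumping assumption matters.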
Other assumptions rear their ugly heads when soc-sci types try to come up with models for how future events will play out. I recall reading one paper that purported to show that the “Mexico City” policy (withholding US funds from organizations that, as part of their activities, perform abortions, refer for abortions, or promote legislation lifting restrictions on abortion) would cause more abortions. If my rather vague memory serves, the paper (which may have been from the Alan Guttmacher Institute–ordinarily a pretty responsible source, though it occasionally slips into abortion-rights advocacy that muddles its analyses) basically took surveys in which women in many different countries had stated whether or not they felt that contraception was a pressing need, and combined that with statistics on unplanned pregnancies in those countries. (If this wasn’t the exact setup of the paper, it really doesn’t matter–again, I’m trying to illustrate a point about how much underlying belief-structure most social-science data rests on. But I read the paper over a year ago. I’m not trying to talk about this specific paper; I’m using my vague memory of it because it’s easier than making up an example out of whole cloth.) Anyway. The paper claimed that since some percentage of unplanned pregnancies ends in abortion (the percentage varying by country), more access to contraceptives would lead to fewer unplanned pregnancies and thus to fewer abortions.
Here are just a few of the problems with this argument: 1) It assumes that no funding arrives to replace the US government funding. It also assumes that the organizations in question (say, International Planned Parenthood, or Marie Stopes International, etc.) don’t transfer funds from other operations so that they can promote contraceptives, and that there are no extra fundraising drives (probably using the Mexico City policy as a rallying point!).
2) It relies partly on surveys, generally not the greatest source of information. People tell surveyors what they think the surveyors want to hear; this is all the more likely when there’s a significant gap in power or privilege between the interviewee and the interviewer. We don’t know under what conditions these “Do you need more contraception?” conversations took place, so it’s impossible to judge to what extent women might have felt intimidated by the interviewer. We also don’t know how the question was framed–was it, essentially, Would you reject a basket of condoms if it fell out of the sky? or, Do you feel like you need more education about reproductive health, how you can space your pregnancies, and how to prevent unplanned pregnancy? or, Do you want contraceptives more than you want other things, like water purification or small loans? When people are asked to rank their needs, you typically get a better sense of what they want than when they’re simply asked whether they want more of one thing or another.
3) It assumes that the amount of sex in a society remains stable as contraceptive use becomes more widespread. In other words, if there are 20 unplanned pregnancies in a town each year and ten end in abortion, and condoms (or, if we’re talking about Third World countries, why not make it a funky experimental drug that’s too chancy to place on the US market?) prevent 90% of unplanned pregnancies (allowing for variations in use, blah blah blah), then if people have the same amount of sex pre- and post-condoms you’d expect there to be only two unplanned pregnancies the next year, hence only one abortion, hence a major reduction in the unborn body count. But if widespread availability of contraceptives causes people to take more sexual risks, or if it changes their sexual behavior patterns or their views of parenting or their views of pregnancy for whatever other reason, then you really can’t say what will happen to the abortion rate. (For example, if widespread contraception leads to a cultural change in which pregnancy is seen as something a smart girl could avoid, rather than something that happens to a lot of people and is an understandable failing, there might be more pressure to abort before anyone finds out you’re pregnant than there was pre-condoms. Or not–but we can’t know, is the point.)
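Since point 3 hinges on arithmetic, here’s the same back-of-the-envelope calculation as a short Python sketch. The baseline numbers are the hypothetical ones from above (20 unplanned pregnancies, half ending in abortion, contraception preventing 90% of them); the risk_multiplier and adjustable abortion_share are my own illustrative knobs, standing in for the behavioral and cultural shifts that a static-world model simply assumes away.

```python
# Back-of-the-envelope version of point 3, using the hypothetical town above.
# Nothing here is real data; it only shows how sensitive the conclusion is
# to the assumption that behavior stays put.

BASELINE_UNPLANNED = 20   # unplanned pregnancies per year before contraception arrives
EFFECTIVENESS = 0.90      # share of unplanned pregnancies the contraceptives prevent

def projected_abortions(risk_multiplier, abortion_share):
    """Abortions per year if underlying risk-taking scales by risk_multiplier
    and abortion_share of the remaining unplanned pregnancies end in abortion."""
    would_be_pregnancies = BASELINE_UNPLANNED * risk_multiplier
    unplanned = would_be_pregnancies * (1 - EFFECTIVENESS)
    return unplanned * abortion_share

# The static-world projection: same amount of sex, same attitudes
# (10 of 20 pregnancies ended in abortion before, so abortion_share = 0.5).
print(projected_abortions(risk_multiplier=1.0, abortion_share=0.5))  # 1.0

# But if easy contraception triples risk-taking, or if unplanned pregnancy
# comes to be seen as an avoidable embarrassment (pushing abortion_share up),
# the projection moves:
print(projected_abortions(risk_multiplier=3.0, abortion_share=0.5))  # 3.0
print(projected_abortions(risk_multiplier=1.5, abortion_share=0.8))  # 2.4
```

The point isn’t that any of these alternative settings is the right one; it’s that the size of the projected reduction depends entirely on knobs the model quietly holds fixed.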
Basically, predicting the future is enormously difficult, so social scientists try to make their jobs a little easier by assuming a static world.
The best social science work does two things to minimize these huge pitfalls: 1) It works hard to identify and acknowledge the assumptions on “both sides” (or several sides) of any contentious issue, and examines the issues from a number of perspectives and with several different sets of assumptions.
2) More importantly, it explicitly acknowledges that social science rests on philosophy. It explains: a) why the work’s authors believe that some premises and assumptions are better than others, i.e. why they controlled for some variables and not others, or why they asked a certain kind of survey question rather than another kind, etc., and b) why the work’s conclusions make sense. What’s the underlying reason for the conclusions? In medical research, I think this is called the mechanism–it’s the bridge between correlation and causation. For example, researchers had found a correlation between smoking and lung cancer, but until they figured out how smoking messes up the lungs they could only tentatively present this correlation as evidence of causation. If you want a book that identifies the “mechanisms” behind several facts about marriage and family life, I highly recommend The Abolition of Marriage (and its half-sequel, The Case for Marriage). AOM shows the connections between social science research and an inspiring vision of marriage and promise-making.