Truth by Survey? (RJS)

This opinion piece by Lee Dye from ABC News meshes well with several of the recent topics of discussion – including the extensive discussion of Rodney Stark’s book What Americans Really Believe, the post on Living in Denial, and the ever-heated consideration of global warming.

Global Warming and the Pollsters: Who’s Right?

Is the glass half empty, or half full?

Public opinion polls on global warming seem to be all over the map these days. A Gallup poll in March indicated that nearly half the people in the United States think the consequences are exaggerated and they’re not particularly worried about their future. But two polls released in the last few days show that most Americans believe global warming is real, the consequences could be great, and it’s largely our fault.

Truth by survey would be of no consequence if the polls and the discussion about them only provided data and didn’t presume to predict behavior or sway opinion. This opinion piece (like Stark’s book, I believe) discusses data – but it has an agenda.

“The take-home message is, the more you know about the field of climate science, the more you’re likely to believe in global warming and humankind’s contribution to it.”

Although the polls do not all agree on the numbers, or even the trends, the underlying message is that most people are now recognizing that global warming is real, the consequences are significant, but at the moment they don’t rank up there with getting a job and protecting the country from terrorists.

Public opinion changes, sometimes on whims, sometimes on disasters. But if there’s one thing all of these polls show, it’s that people are paying attention now, more so than ever before. There may be some confusion, and the consequences may seem distant and imprecise, but most Americans are now coming to grips with global warming and what we need to do to stop it.

How accurate are pollsters at determining what Americans really believe – be it about spirituality, religion, or science? Do you pay attention?
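As an aside for the statistically inclined: even a perfectly conducted poll carries sampling error, which is one reason polls can seem “all over the map” without actually contradicting each other. A minimal Python sketch (my own illustration, not from Dye’s piece) of the standard 95% margin of error for a reported proportion:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents reporting a 50/50 split:
print(f"+/- {margin_of_error(0.5, 1000):.1%}")  # about +/- 3.1%
```

So two polls of a thousand people each, reporting 47% and 52%, may not really disagree at all – and this assumes a simple random sample, which real-world polls only approximate.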

  • Robin

    Any type of research that is not a randomized experimental trial with a double-blind design is worthless.

  • dopderbeck

    Actually, I think it’s much better to gauge public opinion through blog comments. ;-)
    I think the end result of the discussion in the other thread was this: broad social science surveys can have a place, but we should be very careful about basing any grand conclusions on them.
    Personally, I take all qualitative social science research with a healthy dose of skepticism.

  • RJS

What? That is a bit strong – especially the “double blind” bit. Certainly it is the preferred method in a clinical trial of drugs, but how would you apply it to an opinion survey? It can’t even be applied to most clinical treatments.

  • Robin

My real opinion is that they are OK if you don’t take them too seriously. They are probably the second poorest data source for social science research, coming in ahead of qualitative designs. We know that people lie (the Bradley Effect), and we know that opinions or preferences can change depending on the weather outside (how many stories about global warming do we get during hot summer months? – this affects public opinion just like record snows in the winter). However, if we want to know things about people’s inner workings, or traits not immediately apparent from external actions, I don’t see a better way of getting the information.

  • Robin

    Just for clarification, when I use the term qualitative research I am referring to the recent research trend of conducting long interviews with a small sample of respondents (I’ve seen really low, like 6) and then putting the transcripts into a computer program that analyzes them for similar themes and hidden meanings. This has taken off in social work research (and I believe sociology).
    I know the double blind standard is high, and I guess really silly if you’re not talking about medicine. However, the randomized experiment is absolutely essential for any research to have validity. True random experiments are feasible in medical studies (if you ignore the fact that the population using the hospital is likely not random); I assume they are possible in most fields of actual science. In the social sciences they have to be approximated, either by (1) searching for natural phenomena that approximate the conditions of a true experiment and studying the situation after the fact, (2) overly complicated statistical methods intended to control for the lack of a random experiment, or (3) randomizing other factors, such as the sample group. All three social science approaches lack the ‘true’ standard of the randomized experiment and open themselves to endless nitpicking, but they are the best option we have. The only alternative I see is to throw our hands in the air and say it is impossible to know what the effect of an increase in the minimum wage will be, or an increase in the price of gasoline. We do the best we can with imperfect tools.
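    To make option (3) concrete, here is a minimal sketch (purely illustrative – the names are my own) of randomizing a sample into treatment and control groups, which is the core of any approximated experiment:

    ```python
    import random

    def random_assign(units, seed=42):
        """Randomly split a list of units into treatment and control groups."""
        rng = random.Random(seed)   # fixed seed so the split is reproducible
        shuffled = list(units)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]

    treatment, control = random_assign(range(100))
    ```

    Randomizing who lands in each group is what lets you attribute any difference in outcomes to the treatment rather than to how the groups were chosen.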

  • Larry

    The real problem is that polling generally isn’t transparent enough. Often the exact question asked isn’t revealed, and wording can cause a huge difference in responses: “Do you prefer our current free-market health care approach or the socialist approach of Obama?”. Response rates are often not revealed, nor is the method of selecting the respondents in the first place. In fact many pollsters have an agenda, or those paying them do, and there are many ways to arrange for the outcome of a poll to agree with what whoever is paying for it wants.

  • DRT

    I consult for a very large consumer-oriented company. In this company (which is definitely a quant company), surveys, focus groups, and all sorts of data are used to determine the design of experiments; however, a statistically designed experiment is used to determine validity.
    I think I agree with this approach. Most surveys and other investigations are great for getting ideas, and even for forming hypotheses. But they really are not the best for determining validity.
    Take the seemingly simple task of determining who is going to win an election. The only real way to know is to run the election.
    In my survey of one person (me) I have concluded that most surveys are not conducted with a scientific approach. And all of us most assuredly would concede that it would be best to approach things in a scientific way, right? In looking at the data behind my survey I realize that my survey results may be contaminated by the jaded and cynical nature of the population under examination….

  • Jason Lee

    In random samples or national samples there is a range of quality, e.g.:
    1. Response rate (the share of people who actually agree to be surveyed) can vary. Polls like CNN, Pew, CBS, and probably many other popular and consumer surveys have relatively low response rates, whereas careful bajillion-dollar surveys like the GSS or the National Survey of Youth and Religion have much higher ones. We can trust the latter more.
    2. The quality of the questions. I think several people on this blog have made astute critiques of some of the survey questions discussed. But no question is perfect, and so we shouldn’t take this too far.
    3. Do the survey results match those of other surveys? What’s interesting is that many surveys come up with roughly similar results independently of each other. But when a survey is way off from the others, we should be cautious.
    There are of course many other quality-related issues. We’ve not even gotten into how the data are analyzed. All of this sounds like a lot to keep track of, though… People want to be simplistic about the Bible, but when it comes down to it, it was translated, and certain methods of translation are better than others. Some people want to throw it all out when they learn how messy translation work is. The same is true with survey data. This is probably not the best comparison, but I think you get my idea. The best advice is not to take the sexy new stat and run with it too far without comparing it to some other reputable study, or checking response rates and whether the sample was random.

  • Robin

    This might be slightly off topic, but I think it is appropriate.
    When I was training to be a social worker we had to take several courses dealing directly, or tangentially, with diagnosing mental disabilities. I kid you not that the tools we primarily used were survey instruments with questions that looked remarkably similar to the ones used by Rodney Stark. Here is an example of one such survey/questionnaire for schizophrenia.
    If we are saying that such types of questions are insufficient for the relatively simple task of diagnosing evangelical affiliation, what does that say about the state of our mental health field?
    I understand that Ph.D.-level psychologists will have a keener intuitive understanding of such diagnoses, but for Master’s-level social workers this was the type of instrument used to diagnose everything from ADHD to schizophrenia (not to mention the question of whether potential schizophrenics, in a lowered or altered mental state, can truthfully and accurately answer a survey).