Trump and the Art of Questionnaire Design

February 24, 2017
Image from Alberto G. CC BY 2.0

President Trump and his team are in the process of carrying out an online survey. It is called the ‘Mainstream Media Accountability Survey’ and is currently accessible on the Republican National Committee’s website here. As someone who spends a fair amount of their time designing questionnaires for psychology studies, reading this relatively short questionnaire was akin to watching a horror movie.

So I thought I’d offer some helpful advice to the Trump administration so they can avoid the common pitfalls they slam head-first into with this survey. Despite there being only 25 questions, there is a lot to get through, so let’s begin:

[Screenshot of survey question]

This is what’s known as a leading question. It assumes that the respondent agrees with the questionnaire writer’s view that the media is doing a bad job and then instructs them to pick the cases they think are worst. A better approach would be a neutral lead-in question, like: Do you think the mainstream media overall does a good or bad job of representing Republican positions on issues? Responses could then be measured on, say, a 9-point scale (1 = Extremely Poor, 5 = Neutral, 9 = Extremely Good). This avoids biasing responses towards a negative viewpoint. (This point applies equally to practically every question on the survey.)

Next, if the question was really intended to investigate the quality of reporting on issues, how about a question like: Please rank the issues listed, from best to worst, in terms of how you feel the mainstream media represents the Republican perspective. To be fair, you would also give people the option to opt out of answering, or to indicate that they think all of the issues were covered with a similar degree of fairness. (Although this questionnaire obviously isn’t about being fair…)

[Screenshot of survey question]

Right, so the mistake here is in assuming that people get their news from a television source. This question is akin to asking someone: Where do you primarily bury the bodies of people you kill? Under the patio _ ; In the desert _ ; At the bottom of a lake _ . See the problem? And it’s not resolved by the follow-up question:

[Screenshot of survey question]

The problem here is a lack of clarity about what is being asked: Q6 suggests that the topic of interest is preferred television news sources, but the wording of this follow-up question allows two equally plausible interpretations: 1) Do you use a (television) source not listed above? OR 2) Do you use (another type of) source not listed above? Both are reasonable interpretations, but they are asking different things. Crucially, neither interpretation recognises the possibility that people might not get their news primarily from a single network. How is someone supposed to respond if they rely on a variety of sources across different mediums? They can spell it out in the box provided, I suppose, but if the question were designed better they wouldn’t have to waste their time (which again applies to almost all the other questions).

[Screenshot of survey question]
This question is written very poorly. Fundamentally, it is asking a very simple question along the lines of ‘Do you believe X’, but unfortunately the author has chosen to phrase X in the negative, which biases the responses and makes the question hard to follow. Do I believe that the media DOES NOT DO due diligence BEFORE publishing stories? No, err… Yes? Which one means I disagree with the questionnaire author? Phrasing it positively would at least help reduce some of the ambiguity, although the bias issue remains: Do you believe the mainstream media needs to do more fact checking before publishing stories on the Trump administration?

[Screenshot of survey question]

This question seems to be confusing advocacy with gathering information, but even ignoring that, what poll is the question talking about? The author seems to be concerned about whether the respondent is aware of some specific poll but fails to provide any identifying details, preferring instead to provide a subjective summary of the poll’s findings. Answering the question positively therefore requires endorsing the author’s interpretation of a specific survey, and that is problematic, not least because the respondent doesn’t know which one they mean.

For instance, I am ‘aware’ that there are a few polls, such as this Rasmussen one, which indicate that there is majority support for Trump’s immigration order; however, I am not ‘aware’ that this means a majority of Americans actually support Trump’s order. These are two very different things. The latter assumes that individual polls have the power to accurately reveal how the entire US population thinks, whereas the former involves acknowledging the result of one specific poll from one selected sample, which may or may not be representative of the US in general. I can elaborate this view in the box provided (so that’s something), but why not just ask a clearer question like: Were you aware that a recent Rasmussen poll found a majority of respondents supported Trump’s temporary travel ban executive order? Or, to be less transparently biased, how about something neutral like: What % of Americans do you think are in favour of President Trump’s recent temporary travel ban executive order? This would also provide more fine-grained data about the respondent’s actual perspective on the topic of concern.

[Screenshot of survey question]

Remember that point I made about leading questions? This is another textbook example. The question isn’t even about the media; it is about the relationship between taxes and job creation. There is no need for the unsourced swipe at ‘the media’. The respondent has no idea what specific ‘media’ is arguing that raising taxes creates jobs, but stating it here will inevitably bias the respondent’s answers concerning the media for all of the following questions. Furthermore, twisting the question up to make a thinly veiled attack results in the wording again becoming needlessly confusing. The question asks whether the respondent agrees that they do not agree with the media that raising taxes does not create jobs? Seriously? Why not simply ask a neutral, positively phrased question like: How far do you agree that lowering taxes leads to more jobs? I mean, I know why not, but they could at least pretend.

[Screenshots of survey questions]
At this point it seems redundant to mention the leading question issue, but wording like that used in Q17 & Q24 adds the wrinkle of demonstrating that the questionnaire author has a preferred view, and hence that there is an implied ‘correct’ response. By this point in the questionnaire the author’s massive bias is hardly a secret, but phrases like ‘our movement’ and ‘our message’ completely dispel any semblance of neutrality. They also problematically conflate Trump’s positions with the views of the wider Republican movement. How do you answer Q24 if you disagree with Trump’s position but agree with delivering Republican messages ‘straight to the people’?

In summary, the survey is a complete shambles, from minor issues like confusing wording and overly restricted response options to major problems like the evident severe bias and the repeated use of leading questions.

But the cherry on top relates to the sample. Political polling often involves some bias in the population sampled, and this is somewhat inevitable when a survey is focused on a specific target group, like Republicans. Yet despite their best efforts to gather responses only from a sympathetic population, the Trump campaign are not happy with the sample they collected, presumably because the results did not reveal the patterns they wanted. To resolve this, they issued another call via email asking more Trump supporters to come and fill out the survey…

The overt bias and the clear efforts to manipulate results are so transparent that it is almost funny, but when you consider how the survey results are likely to be used, it becomes a little harder to see the funny side.

"I"ve been having doubts about Michael Shermer regarding one or two staff contributors to his ..."

Michael Shermer endorses popular alt-right Youtuber ..."
"Seems to be the real-life cognate is parents getting kids to behave by reminding them ..."

Does priming religion make people more ..."
"Ya, a trained corvid would do better"

New Discovery of Ancient Neanderthal Rituals?
"This article is from the middle of 2016 and the most recent comments are from ..."

New Discovery of Ancient Neanderthal Rituals?

Browse Our Archives

TRENDING AT PATHEOS Nonreligious
What Are Your Thoughts?leave a comment