
Online surveys and online survey software



I have been conducting surveys since 1985, and our firm was one of the first to use online surveys in 1996. Certainly we have come a long way since then, but for better or worse, principles of good survey design and use still apply.

Here are some questions to ask before conducting an online survey.


How do you integrate online surveys into a larger effort?

It’s really easy to collect data and slap together something online.

If it’s a small online survey for a small number of people, there is software that will do it for free, or as part of a CRM (customer relationship management) application or a bulk email program.

Unless, however, the survey is tightly integrated with your company’s strategic direction, the people who decide things influence its content and see the results, and those decision-makers actually use the data, having a bunch of bits sitting neatly in an Excel file won’t do much good.


Do you have the skills, knowledge and neutrality to manage this?

Do you (plural now, as hopefully you have a team) have good statistical skills? Do you know factor analysis from discriminant analysis? Do your presenters come across as neutral, not aligned with just one power group in your company? Do you have excellent communication skills that allow you to translate sometimes abstract, noisy data into something meaningful?


What data should you collect, and what scaling do you use?

Yes, I know, this article is about online surveys. Sometimes, though, they are not the best way of collecting data. In three of our clients’ cases, surveys were done on paper. One client didn’t have enough computers for its 5,000 employees; the other two were staffed almost entirely by engineers and programmers, who were reluctant to convey potentially explosive information that could be tied to an IP address.

In addition, you must seriously ask yourself what scale to use in your online survey. Many people default to an “agree-disagree” scale, where the options are strongly agree, agree, neutral, disagree and strongly disagree. This is one of the worst scales to use, and should be avoided at all costs.

If you must know why, there are inherent difficulties with “bipolar” scales: just because you strongly disagree with a statement doesn’t mean you would strongly agree with its opposite. In addition, there is a positive response bias related to socio-economic status, so that lower-income respondents tend to agree with statements to the point of contradicting themselves.
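To make the contradiction concrete, here is a minimal sketch in Python, with made-up numbers and hypothetical column names, of how you might flag respondents who agree with both a statement and its reverse-keyed opposite:

```python
import pandas as pd

# Hypothetical responses on a 1-5 agree-disagree scale: q1 is a statement,
# q1_reversed is its opposite. A consistent respondent who marks 5
# (strongly agree) on q1 should mark 1 or 2 on q1_reversed.
responses = pd.DataFrame({
    "q1":          [5, 4, 2, 5, 3],
    "q1_reversed": [1, 2, 4, 4, 3],
})

# Flag respondents who agree (4 or 5) with both a statement and its
# opposite -- a rough indicator of acquiescence (positive response bias).
contradictory = (responses["q1"] >= 4) & (responses["q1_reversed"] >= 4)
print(f"Contradiction rate: {contradictory.mean():.0%}")
```

If a noticeable share of respondents trips this check, the scale itself, not the respondents, is the likelier culprit.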

Should you use online surveys or software at all?

There are pluses and minuses to any way of collecting data, surveys included. Surveys are “reactive”: people know they are being asked. Sometimes this is the only way, often the easy way, but not always the best way. Surveys also give people a choice about whether to respond, and those who respond tend to be different folks from those who don’t.

This realization is also an opportunity for creativity. Suppose a museum wanted to know the popularity of some children’s exhibits. You could, for example, create a big stuffed toy with a tablet inside. As children walk by, a sign would flash and ask, “What did you like?” in big letters. Some children might respond, and who knows, some might fill out your survey many times.

Instead of this reactive approach, the museum decided to collect data passively: nose prints. They counted these every hour and, as a bonus, got a rough guess at the children’s ages from the height of each print. My guess is they found a number of adult noses along the way. By using this non-reactive approach, they developed an indicator of exhibit appeal, all without using survey software.


How do you want to break out the data using online survey software?

The good news: it’s really easy to do this. The question becomes which breakouts are really important. It is also vital to think about how you ask these seemingly easy questions, such as gender and which department someone is in. Facebook offers many gender categories, and we spent hours with a client trying to come up with words that employees at all levels could identify as their “department”.
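As a rough illustration, the breakout itself really does take only a few lines of pandas; the hard part is choosing the categories. The file name and the “department” and “satisfaction” columns below are assumptions for the sketch:

```python
import pandas as pd

# Hypothetical results file with one row per respondent.
df = pd.read_csv("survey_results.csv")

# Mean satisfaction broken out by department, with counts included so
# small groups are visible -- and can be suppressed to protect anonymity.
breakout = (df.groupby("department")["satisfaction"]
              .agg(["mean", "count"])
              .query("count >= 5"))  # hide groups too small to stay anonymous
print(breakout.sort_values("mean"))
```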


How do you identify the differences between groups?

Let’s say that management wants to know which groups they should focus on: whether, perhaps, they have different needs, or one is in more trouble than another. All too often, I’ve seen “percentage comparison charts” try to do this job. They show, for example, that group A has a certain percentage of answers in one scale category, and a lower percentage in another. Group B’s results are shown in comparison, along with group C’s. You see a dizzying array of percentages, all nicely color-coded. Unless the answer is blindingly obvious, it’s awfully hard to tell what any of it means.

The reason is that you’re using percentages. This is especially a problem when you use agree-disagree scales, and also a problem because items in your survey are often correlated with each other, so each is not an independent indicator. What you think distinguishes one group from another may look simple, but is often deceptive.

The answer to these questions is usually a combination of multiple regression, logistic regression, and discriminant analysis.
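As a sketch of what this can look like, here is one way to run a discriminant analysis with scikit-learn. The file name, the item names q1 through q5, and the “group” column are assumptions for illustration, not a fixed recipe:

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: numeric survey items q1..q5, plus the group each
# respondent belongs to.
df = pd.read_csv("survey_results.csv")
items = ["q1", "q2", "q3", "q4", "q5"]

# Discriminant analysis finds the weighted combination of (correlated)
# items that best separates the groups, instead of eyeballing dozens of
# percentage bars item by item.
lda = LinearDiscriminantAnalysis()
lda.fit(df[items], df["group"])

# The relative size of each coefficient suggests how much that item
# contributes to telling the groups apart.
weights = pd.DataFrame(lda.coef_, columns=items)
print(weights.round(2))
```

A logistic regression tells a similar story when there are only two groups; either way, you get one defensible answer instead of a wall of color-coded percentages.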


How will you present the data from an online survey in an actionable form?

To be clear, what I don’t mean is to pre-configure things so you get the results you (or management) want. What I am suggesting is to decide on the formats in which data should be presented, and which graphics should be displayed. If you do this before data are collected, you can work out the formatting on sample data without being biased. This may also give you some feedback on how to design the survey to collect data that are more actionable.
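One way to do this, sketched below with purely synthetic numbers and made-up column names, is to mock up the report before a single real response comes in:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Generate mock responses so stakeholders can react to the report format
# itself, long before any real (and potentially political) results exist.
rng = np.random.default_rng(0)
mock = pd.DataFrame({
    "department": rng.choice(["Sales", "Engineering", "Support"], size=200),
    "q1": rng.integers(1, 6, size=200),  # 1-5 scale, purely synthetic
})

# Draft the same table and chart the final report will use; swap in the
# real data later without changing the format.
draft = mock.groupby("department")["q1"].agg(["mean", "count"])
draft.plot.barh(y="mean", legend=False, title="DRAFT -- mock data only")
plt.show()
```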