First, some warnings about employee surveys:
- Simply running a survey tells people that change is coming, and that something will happen. Therefore, you must make sure something does happen and that people can see it happening.
- Surveys subtly effect change by telling people what is considered most important; “you get what you measure.” Choose your questions carefully.
- The way the survey is done sends a clear message. People react to the language in announcements, to the nature and type of questions, to the process of taking the survey, and to the speed of communication and change once it’s over.
- You have to tell employees the results of the survey, or face disengagement, apathy, and “working to rule.” Ideally, employees will get feedback sessions with action planning built in.
Some key validity tips
- Questions must be clearly understood by everyone; the language should be as simple as possible, to avoid literacy issues.
- Unless there are social expectations to fight, be direct.
- The shorter the sentence, the more likely people will read it (rather than scanning it).
- Phrase statements in a positive way, or the phrasing effect will drown out the content.
- Doing statistics (and taking the survey) is far easier when you have a single scale.
- Rank-orders are hard for the respondent, and even harder for reporting and statistics.
- Each concept gets its own question (avoid “double barreled” questions).
- Questions should be behavioral and concrete rather than conceptual, wherever possible.
- As a general rule to remember, people do not read instructions.
- Large print and frequent paragraph breaks increase the likelihood that adults will read the full text.
- Follow survey conventions so people don’t get confused. For employee surveys, this means go from left to right, negative to positive, with the most positive items having the highest numbers (e.g. strongly agree = 5 and strongly disagree = 1). On Web surveys, the response options normally aren’t numbered. Help respondents to fill out the survey using the right scale. One person using the wrong scale can wreak havoc, and it’s very common.
- Work hard to get as many people in the sample as possible to complete it, to avoid nonresponse bias.
- Avoid “binary” questions that “lose” information (“Are you satisfied?” should be “How satisfied are you?” and “Do you want this service?” should be “How much would you pay for this service?”)
- When necessary, define the anchors completely. While this creates a statistical violation (you can no longer simply assume the distance between each number is identical in size), the effects may be minimal, and you may be able to avoid a great deal of bias and guessing whether respondents are interpreting the scales the same way. (Rather than simply asking “How well does the organization’s mission guide your actions? — Completely to Not at all,” define each step, e.g. “I refer to it each time I make a decision,” to “I never use the mission to make real decisions,” with intermediate steps also filled in.)
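As a small illustration of the scale conventions above (1–5, higher numbers = more positive), here is a sketch in plain Python of scoring responses, including flipping a negatively phrased item so that higher always means more positive. The item names and responses are invented for illustration:

```python
# Illustrative scoring for a 5-point agreement scale
# (strongly disagree = 1, strongly agree = 5), following the
# conventions above. Item names and data are hypothetical.

SCALE_MAX = 5  # top of the 1-5 agreement scale

def score_response(raw, reverse_keyed=False):
    """Return the scored value of one answer on the 1-5 scale."""
    if not 1 <= raw <= SCALE_MAX:
        raise ValueError(f"response {raw} is outside the 1-{SCALE_MAX} scale")
    # If a negatively phrased item slips in despite the advice to phrase
    # statements positively, flip it so higher always means more positive.
    return SCALE_MAX + 1 - raw if reverse_keyed else raw

responses = {"clear_goals": 4, "too_much_rework": 2}  # hypothetical items
scored = [
    score_response(responses["clear_goals"]),
    score_response(responses["too_much_rework"], reverse_keyed=True),
]
print(sum(scored) / len(scored))  # mean item score; prints 4.0
```

The range check matters in practice: it catches the “one person using the wrong scale” problem mentioned above before it contaminates the averages.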
A basic process for reliability and validity testing
This is a relatively fast process for basic reliability and validity testing. For the “more proper” method, see the APA, AMA, or other research-organization Web sites.
Reliability is whether the survey gives you the same answers at different times, and whether the questions within it measure the same thing (only applicable if you’re doing a set of questions to measure a single issue, e.g. engagement, involvement, satisfaction, depression, etc.).
Validity is whether the survey measures what it’s supposed to measure. If a survey is not reliable over time, it cannot be valid, because it will vary depending on when it’s taken.
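The internal-consistency side of reliability (do the questions measure the same thing?) is commonly summarized with Cronbach’s alpha. A minimal sketch in plain Python, with invented response data:

```python
# Cronbach's alpha: a standard internal-consistency statistic for a set
# of items intended to measure one construct. The data below are invented.

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])  # number of items
    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [  # 4 respondents x 3 items on a 1-5 scale (fabricated)
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
]
print(round(cronbach_alpha(data), 2))  # prints 0.89
```

Values near 1 suggest the items move together; items that drag alpha down are candidates for the “tend not to change with the others” check described below.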
Ideally you’d check a new instrument against an older one that measures the same thing and has been validity tested already. However, most often people are developing something new because nothing exists already.
- Develop the survey after doing a literature search and gathering needed information.
- Use extra items where possible partly to deal with items that are struck out, and partly to provide some degree of internal validity testing via interitem correlations (that is, by seeing if any items within the survey tend not to change with the others).
- Circulate to local experts for their opinion, for face validity.
- Pilot test.
- Ask 5-10 people to take it
- Ask them to tell you immediately if anything is confusing or hard to answer
- Watch where they “get stuck”
- Ask where things could be easier to understand, or better in general
- Ask for any criticisms.
- Give participants the survey, wait as long as you can, and give it to them again with the questions in a different order; check that each person’s second-round answers correlate more strongly with their own first-round answers than with other people’s.
- Find another method to compare the survey to, e.g. face-to-face interviews, and compare results of both; or give the survey in a different form, e.g. open-ended questions / fill in the blanks.
- Repeat as needed.
- Ideally, when the survey is administered the first few times, have extra open-ended questions so you can do a “quality check” on the numerical data.
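The test–retest step in the process above can be sketched as a Pearson correlation between each person’s two rounds of totals. Plain Python, with fabricated totals:

```python
# Test-retest check: correlate each person's first-round total with
# their own second-round total. A high correlation suggests the
# instrument is stable over time. The totals below are fabricated.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

time1 = [18, 12, 22, 15, 20]  # per-person survey totals, round 1
time2 = [17, 13, 21, 14, 21]  # same people, same order, round 2
print(round(pearson_r(time1, time2), 2))  # prints 0.96
```

Keeping the same person order in both lists is essential: the whole point is that a person’s score should track themselves over time, not the group average.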