White paper: Research Methodology
There are several critical design considerations when constructing interval-rating scales. These considerations address four major areas: simplifying questionnaire development, reducing respondent confusion, generating true interval data, and promoting a dispersion of results. (Van Bennekom, Support Services Questionnaire Library, 2002.)
A rating scale offers respondents an opportunity to select a response from among several options arranged in a hierarchical order. Selecting a scale for a survey question is not a trivial decision. There is no general agreement on the optimal number of scale points or on the wording that should be used. However, practitioners agree on many basic principles:
- The number of points in a scale should be at least three and no more than seven. Many researchers tend to use long scales, for example 10-point scales. Although long scales may seem to gather more data than shorter ones, they do not necessarily discriminate more accurately among respondents, and in data analysis they are typically collapsed into shorter scales. “The debate in literature is ongoing; however, it is safe to say that 4- or 5-point scales will be serviceable for most attitude and opinion data collection.” (Sue & Ritter, Conducting Online Surveys, 2011.)
- Even or odd: Even-numbered scales tend to discriminate more efficiently between positive and negative positions, as there is no neutral option. However, this can make genuinely neutral respondents hesitate, especially if no “don’t know” option has been made available. “Even-numbered scales ease the task of designing scales, because developing descriptors for midpoints is a real challenge. Let’s say you are using the Strength of agreement scale. The endpoints are easy, Strongly Agree and Strongly Disagree, but what is the midpoint anchor? Here are typical choices for midpoints: Neutral, Indifferent, Don’t Know, Neither Agree nor Disagree. None of those is very good.” (Van Bennekom, Support Services Questionnaire Library, 2002.) Who, then, chooses the midpoint? It is often a very mixed group of people. Consider a 1–5 scale in an employee engagement survey: the midpoint will most probably be chosen by those who really are neutral, by those who want to fill in the questionnaire as quickly as possible without giving their genuine views, by those who do not have a clue about the subject, and so on. Having a midpoint easily leads to a peaked distribution of answers, or as Bagozzi (1994) puts it: “Respondents are more likely to volunteer a neutral response, if it is offered as an option.”
- Inclusion of “Not Applicable” or “Don’t Know”: A key design principle with any survey questionnaire is that there must be a response option applicable to every respondent. If there is not, the respondent may skip the question, leaving it blank, or – even worse – choose the neutral option, which distorts the data set. This option should therefore always be available if it could be a relevant choice for some of the respondents.
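The collapsing of long scales into shorter ones, mentioned above for 10-point scales, is a simple recoding step in analysis. A minimal sketch in Python, assuming one common rule (pairing adjacent points); the exact mapping is an analyst's choice, not something the literature prescribes:

```python
def collapse_10_to_5(rating: int) -> int:
    """Collapse a 1-10 rating to a 1-5 scale by pairing adjacent points:
    1-2 -> 1, 3-4 -> 2, 5-6 -> 3, 7-8 -> 4, 9-10 -> 5."""
    if not 1 <= rating <= 10:
        raise ValueError(f"rating out of range: {rating}")
    return (rating + 1) // 2

# Collapse a batch of 10-point responses for analysis:
responses = [1, 4, 5, 7, 10]
print([collapse_10_to_5(r) for r in responses])  # [1, 2, 3, 4, 5]
```

Note that any such collapse discards detail, which is exactly why the extra points of a long scale rarely pay off in practice.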
Questionnaire wording and design
- Questions should be formulated in a precise and easily understood way. A common mistake is to include two questions in one. For example: “The strategy of our company is clear to everyone and supports our development.” If the word “and” appears in a question, make sure you have not fallen into the double-question trap.
- Avoiding response ruts: “One problem with using an interval scale for a long list of questions is that respondents can fall in a response rut. The respondents may stop reading the precise wording of the question and mechanically give the same response to every question. Reverse-coded questions combat this. Here, an occasional question is phrased in the negative. By using one of these early in a list of questions, you tell respondents they need to read the questions carefully. Questions about the absence of something, e.g., the absence of problems or stress are natural candidates for reverse coding.” (Van Bennekom, F., in Support Services Questionnaire Library, 2002.)
- Proper anchor placement: When you set up a grid or matrix for a hardcopy or web survey instrument, the text anchors of the numerical alternatives should be clearly communicated and the spacing between the numbers should be equal. Especially in web forms, it is common for column width to be driven by the width of the column text. This presents a visual picture in which the wider options appear to require a bigger jump to the next option than the narrower ones. Corporate Spirit’s EES questionnaire offers an example of a balanced layout, with a reverse-coded question placed near the beginning.
- Cultural effects: Because Likert-type surveys are usually self-administered surveys, the reading level of respondents must be considered. Sometimes survey questions are read to respondents in cases where reading level might be an issue. To help these respondents keep the options in mind, it is important to limit the number of answer options and keep the same option categories throughout the survey.
- West vs. East: Steven Si has studied how Western and Asian respondents react to questionnaire items with and without an explicit midpoint. He found that Confucian teachings regarding the “middle way” (avoiding the extremes) lead Asian respondents to choose the middle alternative more often than their Western counterparts. To facilitate intercultural comparison, the midpoint should be avoided in questionnaire scales used in both Western and Asian settings. Asian respondents’ tendency to “save face”, both their own and that of others, has also often led them to avoid openly critical views when asked about personal relationships. This phenomenon has shown up in recent engagement surveys conducted by Corporate Spirit: the share of clearly critical answers to personal-relationship questions given by Chinese respondents has been about 50 % lower than that of Western respondents working in the same companies. When interpreting the results, this effect can be compensated for by comparing the results against reliable country-specific benchmarks.
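The reverse-coded questions recommended above must be recoded before analysis so that a high score always means a favourable answer. A minimal sketch, assuming a 1–5 agreement scale (the function name is illustrative):

```python
def reverse_code(response: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Recode a negatively phrased item so high scores are favourable:
    on a 1-5 scale, 1 <-> 5, 2 <-> 4, and 3 stays 3."""
    return scale_max + scale_min - response

# "I often feel stressed at work" answered with 2 ("disagree") is a
# favourable signal, so it recodes to 4 in the positive direction:
print(reverse_code(2))  # 4
```

The same formula works for any symmetric scale, which is why keeping the same answer categories throughout the survey also simplifies the analysis.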
Reporting Employee Engagement Survey Results
- Language of the report: If, for any reason, the respondents understand the language of the questionnaire poorly, the validity and reliability of the results will be severely compromised. It is easy to agree on this, but what about the language of the EES report? I would argue that poor understanding of the language is equally devastating to the purpose of the survey. Take an example: the report is written in English, but the managers in the unit are not very fluent in it, as their mother tongue is Spanish. Considering that an EES deals very much with interpersonal relationships and other delicate matters at a workplace, any misunderstanding could be detrimental to the workplace atmosphere and morale. That is why every person should always get the EES report for their own unit or team in their own language.
- Average figures vs. percentage favourable: There are two common ways of reporting the results of employee engagement surveys: average figures (as such, or related to a benchmark) and the percentage of favourable answers. The former is a fairly precise figure that clearly shows all changes. The latter (%-favourable), especially used by American research institutes, is vaguer: it is normally calculated as the sum of the percentages in the positive answer categories, leaving out the neutral and negative categories. A %-favourable result for an item could thus be 77 %. However, this can mean, for example, that 7 % have been extremely favourable and 70 % somewhat favourable. If you get a result like this and you are the manager of this group of people, what would you try to accomplish? Most probably you would try to move a major part of the 70 % to extremely favourable. But if your efforts led to a situation where, say, 57 % are extremely favourable (a fantastic result) and only 20 % somewhat favourable, you would end up with exactly the same figure: 77 % – so no bonuses in sight!
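The 77 % example above can be made concrete with a short calculation. A sketch using hypothetical answer counts on a 1–5 scale (5 = extremely favourable, 4 = somewhat favourable; function names are illustrative): both distributions yield the same %-favourable, while their averages differ clearly.

```python
def pct_favourable(counts: dict, favourable: set) -> float:
    """Share of answers in the favourable categories, in percent."""
    total = sum(counts.values())
    return 100 * sum(n for c, n in counts.items() if c in favourable) / total

def average_score(counts: dict) -> float:
    """Plain average of the numeric answer categories."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

# Hypothetical counts per category (100 respondents each), echoing the text:
before = {5: 7, 4: 70, 3: 15, 2: 5, 1: 3}   # 7 % extremely, 70 % somewhat
after  = {5: 57, 4: 20, 3: 15, 2: 5, 1: 3}  # 57 % extremely, 20 % somewhat

print(pct_favourable(before, {4, 5}), round(average_score(before), 2))  # 77.0 3.73
print(pct_favourable(after, {4, 5}), round(average_score(after), 2))    # 77.0 4.23
```

The average moves from 3.73 to 4.23 while the %-favourable stays frozen at 77 %, which is exactly the blind spot the text describes.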
– Bagozzi, R. (1994). Principles of Marketing Research.
– Lodico, M., Spaulding, D., & Voegtle, K. (2006). Methods in Educational Research: From Theory to Practice.
– Salkind, N. (2010). Encyclopedia of Research Design.
– Si, S. X. (2005). Research Methodology for Cross-Cultural Management Studies: A Comparison of Questionnaire Surveys.
– Sue, V., & Ritter, L. (2011). Conducting Online Surveys.
– Van Bennekom, F. (2002). Support Services Questionnaire Library.
– Zikmund, W., Babin, B., Carr, J., & Griffin, M. (2002). Business Research Methods.
Lodico et al. (2006): Likert did not believe that there were “neutral” people walking around; even if you were not passionate about an issue, you would at least feel a little something one way or the other.