How to write a change management survey that is valid

An important part of measuring change is being able to design change management surveys that measure what they set out to measure. Designing and rolling out surveys is a core part of a change practitioner's role. However, little attention is often paid to how well designed and how valid the survey is. A poorly designed survey can be meaningless, or worse, misleading. Without the right understanding from survey results, a project can easily go down the wrong path.

Why do change management surveys need to be valid?

A survey's validity is the extent to which it measures what it is supposed to measure; in other words, validity is an assessment of its accuracy. This applies whether we are talking about a change readiness survey, a change adoption survey, an employee sentiment pulse survey, or a stakeholder opinion survey.

What are the different ways to maximise a change management survey's validity?

Face validity. The first way a survey's validity can be assessed is its face validity. A survey has good face validity when, in the view of your targeted respondents, the questions measure what they are intended to measure. If your survey is measuring stakeholder readiness, then it is about those stakeholders agreeing that your questions measure what they are intended to measure.

Predictive validity. If you want your survey questions to be statistically proven to have high validity, you may want to search for and leverage questionnaires that have already gone through statistical validation. Predictive validity is established by showing that your survey's scores correlate with validated measures or with the outcomes they are meant to predict. This may not be the most practical option for most change management professionals.
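As a rough illustration of what that correlation check can look like in practice, here is a minimal sketch in Python, assuming hypothetical scores from eight respondents who completed both the new survey and an already-validated instrument (the data, the instrument, and the numbers are made up purely for illustration):

```python
# Minimal sketch: correlating totals from a new change readiness survey
# with totals from an already-validated instrument, using hypothetical data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-respondent totals (the same 8 people answered both surveys)
new_survey_scores = np.array([12, 18, 15, 22, 9, 20, 17, 14])
validated_scores  = np.array([14, 19, 13, 24, 10, 21, 18, 15])

r, p_value = pearsonr(new_survey_scores, validated_scores)
print(f"Correlation r = {r:.2f}, p = {p_value:.3f}")

# A strong, statistically significant correlation would support the claim
# that the new survey measures something similar to the validated one.
```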

Construct validity. This is the extent to which your change survey measures the underlying attitudes and behaviours it is intended to measure. Again, this may require statistical analysis to establish.
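The statistical analysis involved can take several forms; a common one is factor analysis, which checks whether items intended to measure the same construct actually group together. Below is a minimal sketch using scikit-learn's FactorAnalysis on simulated Likert-style responses (the library choice, the simulated data, and the construct labels are illustrative assumptions, not something prescribed here):

```python
# Minimal sketch: exploratory factor analysis on simulated survey responses,
# to check whether items intended to measure the same construct group together.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents and two underlying constructs
# (e.g. "readiness" and "leadership support" -- labels are assumptions).
latent = rng.normal(size=(200, 2))

# Items 0-2 are driven mainly by construct 0, items 3-5 by construct 1,
# plus noise -- a stand-in for real item responses.
responses = latent[:, [0, 0, 0, 1, 1, 1]] + 0.5 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)

# Rows are factors, columns are items: items measuring the same construct
# should show high loadings on the same factor.
print(np.round(fa.components_, 2))
```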

At the most basic level, it is recommended that face validity is tested prior to finalising the survey design.

How do we do this? A simple way to test face validity is to run your survey past a select number of 'friendly' respondents (potentially your change champions), ask them to complete it, and then meet with them to review how they interpreted the meaning of each question.

Alternatively, you can roll the survey out to a smaller pilot group of respondents before releasing it to the larger group. In either case, the aim is to confirm that your respondents interpret the questions with the same intent with which they were written.

Techniques to increase survey validity

1. Clarity of question wording

This is the most important part of designing an effective and valid survey. The wording should be such that any person in your target audience can read the question and interpret it in exactly the same way.

2. Avoiding question biases

A common mistake in writing survey questions is to word them in a way that is biased toward one particular opinion. A biased question assumes that respondents already hold a particular point of view, and therefore may not allow them to select the answers they would actually like to give.

For example, a question such as "How has the new system improved the way you work?" is potentially biased (unless it follows on from a previous question), because it assumes the respondent already agrees that the system has improved their work.

3. Providing all available answer options

Writing an effective survey question means thinking through all the answer options a respondent may come up with, and then incorporating those options into the answer design. Avoid answer sets that are overly simple and do not give respondents the choices they need.
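As a small illustration of what a fuller answer set can look like (the question and options below are hypothetical, not taken from this article), note the catch-all choices at the end so respondents are not forced into an option that does not fit:

```python
# Hypothetical example: a closed-ended question with a fuller answer set,
# including catch-all options so respondents are not forced into a poor fit.
question = "Which channel do you mainly use to get updates about the change?"

answer_options = [
    "Email updates",
    "Team briefings",
    "Intranet / newsletter",
    "My manager",
    "Change champions",
    "Other (please specify)",           # captures options the designer did not anticipate
    "I have not received any updates",  # avoids forcing a false positive
]

print(question)
for i, option in enumerate(answer_options, start=1):
    print(f"{i}. {option}")
```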

4. Ensure your chosen response options are appropriate for the question

Choosing appropriate response options may not always be straightforward. There are often several considerations to balance, such as the type of scale and the number of response options.

For example, if you use a Likert scale, the number of points on the scale is a critical choice.
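For instance, here is a simple sketch contrasting a 5-point and a 7-point agreement scale; the labels are a common convention rather than a prescription, and the key design choices are whether to include a neutral midpoint and how finely to grade agreement:

```python
# Illustration: how the number of Likert points changes what respondents see.
# These labels are a common convention, not the only valid wording.
likert_5 = ["Strongly disagree", "Disagree",
            "Neither agree nor disagree",  # neutral midpoint
            "Agree", "Strongly agree"]

likert_7 = ["Strongly disagree", "Disagree", "Somewhat disagree",
            "Neither agree nor disagree",
            "Somewhat agree", "Agree", "Strongly agree"]

for name, scale in (("5-point scale", likert_5), ("7-point scale", likert_7)):
    print(name)
    for value, label in enumerate(scale, start=1):
        print(f"  {value}. {label}")
```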

5. If in doubt, leave it out

There is a tendency to cram as many questions into the survey as possible because change practitioners want to find out as much as they can from respondents. However, this typically leads to poor outcomes, including low completion rates. So, when in doubt, leave the question out and focus only on the questions that are absolutely critical to measuring what you are aiming to measure.

6. Open-ended vs closed-ended questions

To increase the response rate, it is common practice to use closed-ended questions, where the respondent selects from a prescribed set of answers. This is particularly the case when you are conducting quick pulse surveys to sense-check the sentiment of key stakeholder groups. Whilst this ensures a quick and painless survey experience, relying purely on closed-ended questions may not always give us what we need.

It is always good practice to have at least one open-ended question to allow the respondent to provide other feedback outside of the answer options that are predetermined. This gives your stakeholders the opportunity to provide qualitative feedback in ways you may not have thought of.
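As a minimal sketch (the wording below is illustrative only, not prescribed here), a short pulse survey might pair closed-ended questions with a single open-ended catch-all at the end:

```python
# Illustrative pairing: one closed-ended pulse question plus one open-ended follow-up.
closed_question = {
    "text": "How confident do you feel about the upcoming change?",
    "type": "closed",
    "options": ["Not at all confident", "Slightly confident",
                "Moderately confident", "Very confident"],
}

open_question = {
    "text": "Is there anything else you would like to tell us about the change?",
    "type": "open",  # free text captures feedback the fixed options above may miss
}

for q in (closed_question, open_question):
    print(q["text"])
```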

To read more about how to measure change, visit our Knowledge page under Change Analytics & Reporting.

Writing an effective and valid change management survey is a critical skill that is often glossed over. Being aware of the above six points will go a long way towards ensuring that your survey measures what it is intended to measure. As a result, the survey results will stand up better to potential criticism and provide information that your stakeholders can trust.