Turn Passive Learning Evaluation Respondents Into Active Ones

In recent years, learning evaluation response rates have declined, and the data collected has become less meaningful. Most learners seem to want to get through the evaluation as quickly as possible without giving it much thought.

What can be done to change this? How can we turn a passive learning evaluation respondent into an active one? At Performitiv, we’ve done it, and we did so with three changes any learning organization should consider making to its evaluation forms.

First Change: Shorten the Evaluation.
Over the years, evaluations have grown longer and longer. While technology tools can support lengthy evaluations, the practice actually hurts the respondent experience. In a world where more than 50% of evaluations are completed on mobile devices, a long evaluation is easy to dismiss: one quick scroll revealing 30+ questions makes the form look complex and cumbersome.

To solve this, shorten the evaluation. Odds are that five instructor questions are not needed when one will do. In general, an evaluation with more than 10 questions is too long; most learners don’t have the time or patience to answer more than that. So adjust to their reality and shorten the evaluation.

That said, keep the evaluation methodology sound. This is why we created evaluations that combine proven learning measurement methodology with modern performance improvement methodology, so the instruments are concise yet credible.

Second Change: Simplify the Evaluation.
Simplification is NOT the same as a shorter evaluation. Simplification means making the evaluation easier to quickly understand and complete. To do this, avoid questions that take significant time and effort to consider, because most respondents won’t invest that time and effort. Additionally, resist the urge to use batteries of matrix questions or conditional questions. That complexity likely leads respondents to exit the evaluation before completing it.

Our suggestion is to use as few question types as possible, for consistency, and to use a consistent scale. We’ve found that the 0-10 scale of the Net Promoter Score has significant data and validity behind it, and every question can then be reported as an average, an NPS, and a percent top box.
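To make the three metrics concrete, here is a minimal sketch of how 0-10 ratings are typically summarized. The standard NPS convention treats 9-10 as promoters and 0-6 as detractors; the top-box definition here (ratings of 9 or 10) is an assumption, since organizations define top box differently.

```python
def score_metrics(ratings):
    """Summarize 0-10 survey ratings as (average, NPS, % top box).

    NPS convention: promoters = 9-10, detractors = 0-6, passives = 7-8.
    Top box here is assumed to mean ratings of 9 or 10; definitions vary.
    """
    n = len(ratings)
    avg = sum(ratings) / n
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    nps = 100 * (promoters - detractors) / n   # ranges from -100 to +100
    top_box = 100 * promoters / n              # % of respondents rating 9 or 10
    return avg, nps, top_box

# Example: ten responses to one question
ratings = [10, 9, 8, 7, 10, 6, 9, 5, 10, 8]
avg, nps, top = score_metrics(ratings)
```

Using one scale for every question means these same three numbers can be compared across questions and across courses.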

Also, consider humanizing the questions themselves. For example, instead of asking “Overall Comments:”, consider asking “Do you have anything you’d like to share with us? It’s okay, we listen.” This wording softens the tone of the question and may yield more responses. Keep in mind that not every culture is receptive to humanized questions, but consider it as a way to simplify questions that read as too clinical in their current format.

Third Change: Invest in User Experience.
As previously mentioned, over half of respondents complete evaluation requests on mobile devices. As a result, the experience is not the same as on a desktop. For one, scrolling is awkward on a mobile device. A good user experience offers multiple ways to navigate the form and optimizes its presentation for the device it is viewed on. For example, with our evaluations, mobile users see one question on their screen; when they tap Next, that question moves off the screen and the next rotates up. They also see that they are on question 2 of 10.

You’ll want to work with a user experience expert or use evaluation tools optimized for user experience to make this change. Otherwise, you risk making the experience worse.

The key is to optimize for the device and to allow multiple navigation paths so respondents can use whichever is most comfortable (e.g., scrolling, rotating questions, click-to-advance). Also, if you’ve already asked a respondent for demographic data, good user experience saves it and pre-fills it so the respondent doesn’t have to enter it again unless something has changed.

An investment in user experience not only modernizes your evaluation’s presentation but also better engages respondents on any device.

Conclusion:
Create active respondents who provide higher response rates and more complete answers. Do this by shortening your evaluations, simplifying the complexity out of them, and applying basic user experience techniques to optimize the respondent experience.

What are your tips to creating active learning evaluation respondents? We’d love to learn about what you’ve done and how it’s working for you!

Thank you,
The Performitiv Team
