
Don't Downplay Survey Data

Performitiv
June 16, 2019

Oftentimes, when we speak to L&D professionals, they dismiss survey data (predominantly evaluation responses) as self-reported and therefore neither valid nor reliable enough to tell their story of value or show evidence of impact.  While it is important to have a process to gather operational data from the business (sales, cost, quality, time, satisfaction, risk, and productivity), sometimes that data is unreliable, untimely, or unavailable.  In these cases, asking participants or managers for feedback on how the learning may impact, or did impact, a result is not only acceptable but a roughly reasonable source of evidence in the story of impact.

L&D controls the evaluation, and adding an indicator of business alignment should be easy to do and require little change management.  Yet many L&D organizations use the evaluation simply as a basic reaction check (aka Level 1) rather than as a means to gather evidence of impact and showcase value.  Often this is because the evaluation is under-appreciated: it is dismissed as 'self-reported' rather than 'real data,' 'hard data,' or 'business data.'  But what about other departments?  Do they use this kind of data?  Let's think about customer service as an example.  Don't they run surveys to gather feedback from customers?  We just filled out a survey after a service we used.  We answered questions and shared feedback on how we will use the vendor's online platform going forward now that the system is set up.  Is our response a 'throw-away' data point?  If it is, why did they ask for it and why did they have us complete it?  The question was there because the customer service team values that response to understand our future use of their platform.

Let's continue with marketing and a market research survey.  We have completed market research studies before.  We answered questions about our experience, about our future behavior in using the solution going forward, and about whether we would recommend it to our friends.  Is this data unreliable or invalid because it was self-reported?  We were being asked about future behavior, which hasn't happened yet, so is that data not worth showing to anyone?  The data was most likely used by the marketing and solutions teams to understand anticipated use based on an initial experience, and it was likely important to them; that is why they asked.

So why, in L&D, when we ask whether the learning will be used directly to improve sales, decrease cost, or create more motivated employees, is the answer downplayed as throw-away data?  Asking thoughtful business outcome indicator questions on evaluations is a roughly reasonable way of showing the connection between the learning and the business outcome, especially when the actual operational data is unavailable.  This data can become a way to talk about the alignment between learning and the outcome: whether it is strong or needs greater emphasis, and whether it is trending positively or not.  Presented properly, it can be a healthy conversation piece.

So how do we present survey data properly when using it as evidence of predictive business outcome alignment?  First, don't call it perfect and precise; call it roughly reasonable.  Next, don't claim it is statistically causal; present it as associative or correlative, especially if the people who take the program show increasing degrees of business alignment as they progress through it.  Finally, if you ask for the actual operational data but the business cannot produce it, explain that this is how the outcome will be reviewed against the learning, since it is a plausible way of doing so under the circumstances, and get buy-in to do it.
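
To make the "associative, not causal" framing concrete, here is a minimal sketch (in Python, using entirely hypothetical milestone numbers and 1-to-5 alignment scores) of how one might check whether self-reported business alignment rises as learners move through a program.  The data, scale, and function names are illustrative assumptions, not a prescribed Performitiv method.

```python
from statistics import mean

# Hypothetical evaluation data: (milestone number, business-alignment score on a 1-5 scale)
responses = [
    (1, 3.2), (1, 3.5), (2, 3.8), (2, 4.0),
    (3, 4.1), (3, 4.4), (4, 4.3), (4, 4.6),
]

def pearson(xs, ys):
    # Plain Pearson correlation; values near +1 suggest a positive association.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

milestones = [m for m, _ in responses]
scores = [s for _, s in responses]
r = pearson(milestones, scores)
print(f"Alignment vs. progression: r = {r:.2f} (associative evidence, not causal proof)")
```

A positive, rising pattern like this supports a conversation about alignment and trend; it does not prove the learning caused the outcome.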

So, if we in L&D downplay survey data as some type of inferior data, that may be our hang-up, not the stakeholder's.  Other lines of business use survey data in their analysis, and it seems to be a reasonable approach to reviewing impact.

At Performitiv, we have standard evaluations with a concise set of questions that go beyond Level 1.  In addition, we use conditional questions that appear on evaluations for specific subsets of learning content, so that specific outcome indicators can be measured and used as evidence in stakeholder discussions.

For example, an organization looking to gather evidence of employee engagement from a leadership program asked learners after each program milestone, "As a direct result of this learning, will you be more motivated and committed to our organization?"  That question was meant to serve as an indicator of engagement and to be part of the evidence used in the discussion around the learning's impact on engagement.
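
As an illustration only, here is a small sketch (with hypothetical Likert responses on a 1-to-5 scale, not actual Performitiv data or product functionality) of how responses to that engagement-indicator question could be summarized by milestone as one piece of evidence in the discussion.

```python
# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree) by milestone
responses = {
    "Milestone 1": [3, 4, 4, 2, 5, 3],
    "Milestone 2": [4, 4, 5, 3, 5, 4],
    "Milestone 3": [5, 4, 5, 4, 5, 4],
}

for milestone, scores in responses.items():
    favorable = sum(1 for s in scores if s >= 4)  # count "agree" or "strongly agree"
    pct = 100 * favorable / len(scores)
    print(f"{milestone}: {pct:.0f}% favorable ({favorable}/{len(scores)})")
```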

Do you want to learn more about Performitiv and why we're the fastest-growing learning analytics technology in the industry?  Contact Us to start a conversation.

Thank you,

The Performitiv Team