June 15, 2018

In learning evaluation, don’t debate decimals

Learning evaluation is a great way to gather constructive feedback on a learning experience. That feedback can range from the quality of the facilitation to the perceived relevance and applicability of the program to one's job. When designed and delivered thoughtfully, evaluations are incredibly meaningful.

However, analysis of learning evaluation data often takes on supernatural powers. Users of this data tend to forget that it is the subjective feedback of learners and, while valuable, should be treated as roughly reasonable data that is neither perfect nor precise.

This is why it can be unhealthy, and maybe dangerous, when learning organizations average evaluation scores out to two or more decimal places. We have seen significant energy spent debating a comparison between one course scoring 4.51/5 and another scoring 4.48/5. We have seen facilitators rewarded for a score of 6.81/7 but penalized for a score of 6.67/7. Debating the decimals is a warning sign that you need to rethink and modernize your learning evaluation process.

First, consider calculating the NPS (Net Promoter Score) and the percent Top 2 Box on the data. Unlike an average, which tends to hide outliers, these scores tend to highlight them, especially NPS. You don't need to gather more data; just calculate your evaluation scores in other ways.
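As a minimal sketch of the idea, here is how Top 2 Box and an NPS-style score could be computed on 5-point evaluation data. The rating-to-bucket mapping (5 = promoter, 1-3 = detractor) is an illustrative assumption adapted from the standard 0-10 NPS scale, and the response lists are hypothetical; adjust both to your own survey design.

```python
def top_two_box(ratings):
    """Percent of responses in the top two categories (4 or 5 on a 5-point scale)."""
    return 100.0 * sum(1 for r in ratings if r >= 4) / len(ratings)

def nps_style(ratings):
    """NPS-style score adapted to a 5-point scale (assumption: 5 = promoter,
    1-3 = detractor, 4 = passive): % promoters minus % detractors."""
    promoters = sum(1 for r in ratings if r == 5)
    detractors = sum(1 for r in ratings if r <= 3)
    return 100.0 * (promoters - detractors) / len(ratings)

course_a = [5, 5, 4, 5, 4, 5, 3, 5, 5, 4]  # hypothetical responses, average 4.5
course_b = [5, 5, 5, 5, 5, 5, 5, 5, 1, 4]  # identical average 4.5, different spread

print(top_two_box(course_a), nps_style(course_a))  # 90.0 50.0
print(top_two_box(course_b), nps_style(course_b))  # 90.0 70.0
```

Both courses average exactly 4.5, yet their NPS-style scores differ by 20 points, which is the point of the technique: the outliers an average smooths over become visible.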

Second, set goals for performance, but analyze the percent variance from goal. After all, measurement is about understanding performance and improving it where needed. Rather than debate the decimals of actual versus goal, translate the difference between actual and goal into a percentage variance and use that for your analysis. Most of the decimal debate will likely go away, because the variances will be minor when the actual difference only appears at or beyond the second or third decimal place. In fact, many of the decimal debates learning professionals have today over evaluation data amount to a variance of 1% or less. Showing that variance helps the report's users realize how immaterial the debate over decimals really is.
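The arithmetic is simple; a short sketch using the 4.51 versus 4.48 comparison from earlier makes it concrete (the goal of 4.50 and the course names are hypothetical):

```python
goal = 4.50  # assumed target score for illustration

courses = {"Course A": 4.51, "Course B": 4.48}

for name, actual in courses.items():
    # Percent variance from goal: positive means above goal, negative below.
    variance_pct = (actual - goal) / goal * 100
    print(f"{name}: actual {actual:.2f}, variance from goal {variance_pct:+.1f}%")
```

Reported this way, the two courses sit at roughly +0.2% and -0.4% from goal, which reframes a heated decimal debate as a difference of well under one percent.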

Third, use AI (Artificial Intelligence) to help highlight whether data is significant. For example, let a system review not only the percent variance of actual to goal but also factor in elements like the number of responses in the data set, so you are not just removing the decimal debate but also avoiding overreacting to small samples.
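A full AI system is beyond a blog post, but the underlying idea can be sketched with a simple heuristic (this rule and its thresholds are illustrative assumptions, not any particular product): require a larger variance before flagging when the number of responses is small.

```python
import math

def worth_attention(variance_pct, n_responses, base_threshold=5.0):
    """Flag a variance from goal only when it exceeds a rough margin that
    shrinks as responses grow (margin scales with 1/sqrt(n)); the base
    threshold, calibrated to 30 responses, is an assumed tuning value."""
    margin = base_threshold * math.sqrt(30 / max(n_responses, 1))
    return abs(variance_pct) > margin

print(worth_attention(-4.0, 12))   # small sample: within noise -> False
print(worth_attention(-4.0, 300))  # large sample: worth a look -> True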
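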

Fourth, while evaluation data is a good baseline, complement it with other data. Utilization data such as completion rates helps you understand whether the learning investment is even being consumed. For newer modalities like micro-learning, consider looking at the behavioral paths learners take through the experience, or the average time they spend in certain elements of it. If you can break this out by demographic, it can help identify the optimal use for impact for a particular role.
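A completion-rate breakdown by demographic might look like the following sketch; the roles and records here are hypothetical placeholders for whatever your LMS exports.

```python
from collections import defaultdict

# Hypothetical utilization records: (learner role, completed?)
records = [
    ("sales", True), ("sales", True), ("sales", False),
    ("support", True), ("support", False), ("support", False),
]

by_role = defaultdict(lambda: [0, 0])  # role -> [completed, total]
for role, completed in records:
    by_role[role][1] += 1
    if completed:
        by_role[role][0] += 1

for role, (done, total) in sorted(by_role.items()):
    print(f"{role}: {100 * done / total:.0f}% completion ({done}/{total})")
```

Even a simple split like this can show that a program consumed heavily by one role is barely touched by another, which is a far more actionable finding than a second decimal place.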

Finally, don’t be afraid to simply collect business result data that should be aligned to the learning. It is okay if you cannot isolate the exact impact the learning had on the result; roughly reasonable is okay. If most learners gave a high evaluation score for impact, that may be sufficient to show alignment, but it certainly helps to see a trend in the business data. If that business data is below goal for multiple periods, use it as an opportunity to collaborate with the business partner to solve the problem rather than defend the learning. Think about how the learning might adapt and how the business might adjust to target the problem at hand and jointly fix it. Maybe the one-day workshop is fine, but a 90-second video could target that underwhelming business result better. Maybe more training isn’t the answer at all; maybe it’s procedures or incentives that do the trick. So, don’t fear discussions where data is not at or above goal; use them as an opportunity to be a better business partner and to think about the problem from a different perspective. In the end, if the result starts to trend better from re-thinking the problem, the debate over the decimals, and over learning’s value, becomes an afterthought.
