Evidence or Proof of L&D Impact? Why Not Both?

In the world of Learning & Development (L&D), impact matters. It signals that L&D is focused on value creation and on performance improvement. Measuring impact is therefore the top objective learning leaders set for their measurement strategy, process, and technology. Given that priority for any L&D executive, this post discusses three options for measuring L&D impact.

Option 1: Ad-Hoc

Option 2: Evidence-Based

Option 3: Proof-Based

Option 1 can be dismissed quickly: there is no data behind it. You believe there is impact because you heard a few people around the water cooler say there was, or because an L&D manager "felt" there was impact since the program was delivered without incident. Feeling or believing there was impact is not measuring impact. Ad-hoc approaches are not measurement; they are isolated observations and a few opinions, and while those are nice to hear, they are not sufficient to understand learning impact.

Let's move to Option 3 next. This is where you assemble significant amounts of data and run statistical models, such as regression analysis, to show a link between the learning program and the business outcome. It may also take the form of a periodic project in which your team or a third-party consultant conducts an in-depth analysis of the program and its impact and writes up a report summarizing the findings. In this proof-based approach you'll likely see a causal model built by statistical experts, and you'll likely work with practitioners certified in ROI impact studies. Both lend significant credibility and produce a highly meaningful report that demonstrates impact in a convincing manner.

Compared to the Ad-Hoc option, the Proof-Based option is a very sound model when a strategic, visible, and costly program is being questioned on its value and/or its cost. It makes sense to plan an in-depth analysis via a causal model or impact study to conclusively determine whether the program had impact. We'd highly suggest doing this on your material programs periodically anyway; it is simply good measurement hygiene.

However, if you run a typical commercial or corporate learning operation, a Proof-Based approach, while ideal, is not practical on a repeatable basis. Your strategy must therefore involve something far more credible than the Ad-Hoc approach but less resource-intensive than the Proof-Based approach. Enter the Evidence-Based approach.

Now let's discuss Option 2, the Evidence-Based option. This is where you leverage your ongoing evaluation process to make measuring impact more meaningful, coupling that data with trends in actual business results to build a reasonable, if approximate, view of impact. The Evidence-Based approach will never be as precise as the Proof-Based approach, and it is not designed to be; but it is far more practical to run on an ongoing basis. Nor is it Ad-Hoc: it is a formal process to gather data, aggregate it, report it, and analyze it in order to draw conclusions.

The good news is that the Evidence-Based approach can run on an ongoing basis with limited resources. It can integrate with the LMS, micro-learning platforms, and other technologies to become a measurement process rather than a measurement project. There are four steps to deploying the Evidence-Based approach as an impact measurement process:

Step 1: Collect impact data (defined as data largely controlled by L&D and linked to a learning program). Do this predictively by asking questions on your end-of-program evaluations about expected job impact and alignment to business results, and again on follow-up evaluations back on the job. Also gather evaluation data on program quality and knowledge gain. Doing this routinely, integrated with your LMS, establishes a baseline for reporting impact not only by program but also by modality and participant demographic. An API connection to the LMS makes the process fully automated and secure.
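
As a sketch of Step 1, here is one way the evaluation roll-up could work once responses are pulled from the LMS. The question names and the "percent favorable" convention (scores of 4 or 5 on a 1-5 scale) are illustrative assumptions, not a prescribed standard.

```python
# A minimal Step 1 sketch: aggregate end-of-program evaluation responses
# into a per-question favorability baseline. Field names are hypothetical.
def impact_baseline(responses):
    """Return % favorable (scores >= 4 on a 1-5 scale) per impact question."""
    questions = ("expected_job_impact", "business_alignment", "knowledge_gain")
    baseline = {}
    for q in questions:
        scores = [r[q] for r in responses if q in r]
        favorable = sum(1 for s in scores if s >= 4)
        baseline[q] = round(100 * favorable / len(scores), 1) if scores else None
    return baseline

# Example: three evaluations for one course, as might arrive via an LMS API.
sample = [
    {"expected_job_impact": 5, "business_alignment": 4, "knowledge_gain": 3},
    {"expected_job_impact": 4, "business_alignment": 5, "knowledge_gain": 4},
    {"expected_job_impact": 3, "business_alignment": 4, "knowledge_gain": 5},
]
print(impact_baseline(sample))
```

Run routinely, the same roll-up can be cut by program, modality, and participant demographic to form the baseline described above.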

Step 2: Gather outcome data (defined as data influenced by L&D but not directly linked to a learning program). Gather it as a trend (by month or by quarter) for at least a few periods before, during, and after the learning and, where feasible, against a naturally occurring control group of personnel who did not take the training. This sets you up to examine impact alongside the data collected on your evaluations.
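
A minimal sketch of Step 2, assuming monthly values of an outcome metric (say, an error rate, where lower is better) for the trained group and a control group; all figures are invented for illustration.

```python
# Step 2 sketch: compare an outcome trend across before/during/after
# windows for trained personnel versus a naturally occurring control group.
def period_averages(series, before, during):
    """Average a monthly outcome series over before/during/after windows."""
    after = series[before + during:]
    return {
        "before": sum(series[:before]) / before,
        "during": sum(series[before:before + during]) / during,
        "after": sum(after) / len(after),
    }

trained = [8.0, 8.2, 7.9, 7.5, 6.8, 6.1, 5.9]  # training ran in months 4-5
control = [8.1, 8.0, 8.2, 8.1, 7.9, 8.0, 8.1]  # no training taken

print("trained:", period_averages(trained, before=3, during=2))
print("control:", period_averages(control, before=3, during=2))
```

A falling trained-group average against a flat control group is exactly the kind of pattern Step 3 looks for in complement to the evaluation data.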

Step 3: Share impact scores on the program (from Step 1), presented both as trends and against a goal. Report this data by program and by personnel demographic, looking for positive trends and performance above goal across these data tags. At the same time, review your outcome data (from Step 2) for associations and correlations where it trends positively over time and/or against the control group. Together, the two data sets tell your story of evidence: an approximate yet data-driven view of impact.

Step 4: Act on the data (from Step 3) to reward and recognize the value created, and to improve performance where it fell short. This step should not be underestimated: it is the opportunity to collaborate with your team and your stakeholders to optimize impact through a performance improvement process. For example, if the evidence shows impact but the stakeholder wants to accelerate the pace of change, collaborate to adapt and adjust to make that happen. This is truly the step where the measurement process and the performance improvement process work together.

Based on the above, we believe both the Proof-Based approach and the Evidence-Based approach are valuable to the L&D operation and should co-exist. The Evidence-Based approach should be a true process, automated and integrated with the LMS, continuously collecting impact and outcome data so that at any time L&D can tell a story of impact and value and act on the data.

However, L&D should still hire experts (such as a statistical causal-modeling expert and/or a certified Phillips ROI Process practitioner) from time to time to conduct an L&D impact project or study. These are important when a material program is under severe scrutiny, is brand new, or has had a major overhaul. Such studies pressure-test your impact assumptions; the measurement itself should be challenged periodically for validity and integrity.

The Evidence-Based approach might be your measurement solution 80% of the time, with the Proof-Based approach needed the other 20%. The Evidence-Based approach leverages technology built specifically to connect to the LMS and gather the right impact and outcome data; it is cost-effective, practical, and repeatable, enabling reasonable, data-driven, continuous conversations and collaboration with L&D stakeholders. The Proof-Based approach leverages expert consultants to review historical information, gather new information, and report their findings; it is an investment in program longevity and integrity.

Performitiv has technology to automate the Evidence-Based approach and strong partnerships to accomplish the Proof-Based approach. To learn more about these models, read about Performitiv's proprietary Performance Optimization Framework for Learning, where you can download the framework itself.

Thank you,

The Performitiv Team


December 26, 2018
