Meetings About Learning Metrics Should Be Constructive, Not Controversial

Last year our organization, Performitiv, had the opportunity to speak with hundreds of learning leaders about their current learning measurement processes. These conversations yielded many insights, such as the need to make learning evaluations less cumbersome and complex for participants to complete, and to make reporting more focused on users and their needs, with fewer rather than more reports.

However, one concerning theme emerged in these meetings. Conversations with business stakeholders (sales, marketing, supply chain, operations, IT, HR, finance, etc.) were becoming difficult. In fact, the words learning leaders used were ‘defensive,’ ‘emotional,’ and, most frequently, ‘controversial.’

When engaging with stakeholders, learning measurement should be a helpful process that fosters continuous performance improvement. So it caught our attention when the word ‘controversial’ came up repeatedly. Probing further, we found that the root causes of the ‘controversy’ involved the following factors:

1) The conversation focused on the great job L&D did even though the business metrics were underachieving.

2) The conversation attempted to link learning to the business result, but too many other factors were involved to do so credibly (outside of an in-depth impact study or causal analysis, where it can be done better).

3) When return on investment was mentioned, it raised skepticism because it tended to be out of alignment with other metrics the business had seen (again, unless it came from a deeper impact study or causal analysis, where it could be more reliable).

So the question is: how do you have these conversations in a way that is constructive, not controversial? Below are some tips.

1. Present your metrics in the language of the stakeholder. Avoid learning jargon like Level 1 or Level 2. The Talent Development Reporting Principles categories of Efficiency, Effectiveness, and Outcomes may resonate better.

2. Showcase business metrics and other operational metrics important to the stakeholder on the report, such as a completion rate or a sales metric. You want the metrics the business stakeholder cares about on the scorecard so they are part of the story and discussion. For example, if sales goal attainment matters to the stakeholder, put it on the scorecard. Even if you are not proving how much learning impacted that goal, it should be part of the discussion.

3. Provide indicators of how the learning aligned to the business results, but do not portray that data as correlative or causal. Where feasible, a reasonable predictive measure is sufficient. If your evaluations gathered information showing alignment to a business objective, that is important to the conversation. The stakeholder is not likely expecting a perfect causal analysis, but they would like to know whether there was some alignment between the learning and the results. The conversation is healthy and constructive when this alignment appears alongside the business metrics of interest to the stakeholder.

4. Most importantly, showcase what can be done to act on the data, and then do it. This is where the majority of the conversation should be, and it is what makes the conversation constructive rather than controversial. In every facet of business, stakeholders compare an actual to a goal and look to their business partners to help them act on areas where performance falls below goal. This should be no different. If sales goal attainment was below goal, don't keep telling the stakeholder how great your Level 1 scores were or how you think the impact from learning was high and the business should have done better. Instead, put an action plan in place, as operations teams do when this happens. Perhaps the L&D team creates a 60-second video that pinpoints what the sales team can do to work toward the goal. In addition, sales management agrees to join some sales calls with the field sales team to highlight specific tasks reps can do to work toward these goals. The next time the measurement occurs (monthly or quarterly), perhaps the measure is moving in a more positive direction because collaborative action was taken on the metric.

5. When needed, do an impact study or causal analysis. If the program is strategic, visible, or costly, consider putting in this extra effort to enhance the conversation. An impact study or causal analysis may be warranted once or twice a year. Just be sure the stakeholder wants and values this data before doing it. This does not mean a deep dive should be done on all programs in all stakeholder meetings; that is neither practical nor repeatable. But do more when the nature of the program and the stakeholder environment warrant it.

In the end, avoid controversy and choose a constructive, collaborative conversation with stakeholders that includes 1) metrics they care about, 2) reasonable alignment of learning to those metrics, and 3) a focus on acting on the data to improve performance rather than debating or justifying how much value L&D did or did not add.

Want to learn more about how to have constructive conversations on your learning programs with stakeholders? Contact us at info@performitiv.com.

The Performitiv Team
