At Performitiv, we have the honor of speaking with many diverse organizations about their learning measurement processes. While these processes differ in maturity, the successful and sustainable ones share one trait: they are simple. This post is about making learning measurement simpler. If you do that, you'll be making a smart decision.
Simple does not equate to unsophisticated or invalid or unreliable. Simple means that learning measurement is a scalable process, not an episodic project. Simple means that users of the information can feel comfortable and confident in the data, so that they use it in empowering and engaging ways. Simple means that you can have collaborative and constructive stakeholder conversations about the story of learning impact, as opposed to complicated and controversial meetings.
So how do leading learning organizations simplify learning measurement while keeping it a smart, meaningful and worthwhile process? Here are a few strategies for doing this.
First, start with the evaluations and make them better. Make them concise, articulate and useful. Keep them to 10 or fewer questions, and ask about value, alignment and application, not just training quality. Standardize them: rather than dozens of custom evaluations, use one standard instrument that allows a few adaptations but stays about 80% consistent.
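As a rough sketch of what that structure can look like (the question wording and data layout here are illustrative, not Performitiv's actual instrument), a standard evaluation is simply a shared core of questions that every program reuses, with a variant adding at most a couple of items:

```python
# Illustrative standard evaluation: a concise, shared core covering
# value, alignment and application, not just training quality.
STANDARD_EVALUATION = [
    {"id": "Q1", "topic": "quality",     "text": "The content was clear and well delivered."},
    {"id": "Q2", "topic": "value",       "text": "This program was a worthwhile use of my time."},
    {"id": "Q3", "topic": "alignment",   "text": "The program supports my team's goals."},
    {"id": "Q4", "topic": "application", "text": "I expect to apply what I learned on the job."},
    {"id": "Q5", "topic": "application", "text": "What might prevent you from applying this? (comment)"},
]

# A program-specific variant reuses the standard core and adds only a
# question or two, keeping the instrument roughly 80% standard.
LEADERSHIP_VARIANT = STANDARD_EVALUATION + [
    {"id": "L1", "topic": "application", "text": "I have a concrete plan to coach my team on this material."},
]
```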
Next, build one or two conditional questions into your standard evaluation to collect evidence of impact on key programs. For example, if a stakeholder wants to understand a program's impact on employee engagement, ask a conditional question just for that program to gauge how motivated and committed learners feel toward the organization as a result of it. This is an outcome indicator, and it helps tell the story of impact without a significant outlay of resources.
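One way to implement that condition, as a minimal sketch: tag each key program with the outcome its stakeholder cares about, and append the matching outcome-indicator question only for that program. The outcome names, question wording and helper function below are hypothetical:

```python
# Hypothetical outcome-indicator questions, keyed by the stakeholder
# outcome a program is expected to influence.
OUTCOME_QUESTIONS = {
    "employee_engagement": {
        "id": "OI1",
        "text": ("As a result of this program, I feel more motivated and "
                 "committed to the organization."),
    },
}

def build_evaluation(base_questions, program_outcomes):
    """Return the standard evaluation plus any conditional outcome
    questions for this program. `program_outcomes` lists the outcomes
    stakeholders want evidence on, e.g. ["employee_engagement"]."""
    questions = list(base_questions)
    for outcome in program_outcomes:
        if outcome in OUTCOME_QUESTIONS:
            questions.append(OUTCOME_QUESTIONS[outcome])
    return questions

# Usage, reusing the standard core from the sketch above:
# build_evaluation(STANDARD_EVALUATION, ["employee_engagement"])
```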
If you have an LMS and it exposes an API, build a connector to it so the measurement platform can automatically distribute and collect evaluations. This is safe and reliable, and it saves significant time and resources.
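In outline, such a connector can be quite small. The endpoint, URL and field names below are placeholders, since every LMS exposes its own API; check your vendor's documentation for the real calls:

```python
import requests  # standard HTTP client; most LMS APIs are REST-based

LMS_BASE_URL = "https://lms.example.com/api/v1"  # placeholder URL
API_TOKEN = "stored-securely-not-hard-coded"     # placeholder credential

def fetch_recent_completions(since_iso_date):
    """Pull recent course completions from the LMS so evaluations can be
    sent automatically to the right learners. Endpoint is illustrative."""
    resp = requests.get(
        f"{LMS_BASE_URL}/completions",
        params={"completed_after": since_iso_date},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a list of {learner_email, course_id, completed_at}

def distribute_evaluations(completions, send_evaluation):
    """For each completion, trigger the measurement platform's own send
    routine, passed in here as `send_evaluation`."""
    for record in completions:
        send_evaluation(record["learner_email"], record["course_id"])
```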
On the reporting side, don't build dozens of reports. Think about your primary audiences in L&D and build a report suite for them. Our research shows there are tactical users, such as instructors and designers, who need to see evaluation results instantly, down to the question and comment level. We have also identified L&D program managers, who need to slice evaluation results by tags such as modality, instructor, location and course, so build a report that does that. Finally, there are stakeholders and leaders, who need a simple scorecard for conversations about the impact of learning.
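The program-manager view amounts to grouping evaluation scores by tag. A small sketch, assuming each response is stored with an overall score and the tags it inherits from the LMS (these field names are assumptions for illustration):

```python
from collections import defaultdict

def average_by_tag(responses, tag):
    """Average evaluation scores by one tag, e.g. "modality",
    "instructor", "location" or "course"."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in responses:
        value = r["tags"].get(tag, "untagged")
        totals[value][0] += r["score"]
        totals[value][1] += 1
    return {value: round(total / count, 2) for value, (total, count) in totals.items()}

# Example responses carrying a score plus tags.
responses = [
    {"score": 4.6, "tags": {"modality": "virtual",   "instructor": "A. Rivera"}},
    {"score": 4.1, "tags": {"modality": "in-person", "instructor": "A. Rivera"}},
    {"score": 3.8, "tags": {"modality": "virtual",   "instructor": "B. Chen"}},
]
print(average_by_tag(responses, "modality"))
# {'virtual': 4.2, 'in-person': 4.1}
```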
Speaking of stakeholders, there seems to be a heightened emphasis these days on complicated dashboards that look pretty but may not be great for conversations with senior people who have limited time. Let's use this as an opportunity to get back to the basics of stakeholder reporting. The basics don't involve dashboards; they involve a simple scorecard with a small, balanced set of metrics rather than dozens. Each metric shows an actual result compared to a goal, along with the variance between the two. Color coding helps the reader see at a glance whether a metric exceeds, meets or misses its goal, and a trend line shows the stakeholder whether the metric's track record has been improving or declining.
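A minimal sketch of one scorecard row, assuming a metric is stored as its actual result, its goal and a short history for the trend (the thresholds, color scheme and field names here are illustrative, not a prescribed format):

```python
def scorecard_row(name, actual, goal, history):
    """Build one scorecard metric: actual vs. goal, the variance between
    them, a simple color-coded status, and the trend direction."""
    variance = round(actual - goal, 2)
    if actual > goal:
        status = "green (exceeds goal)"
    elif actual == goal:
        status = "yellow (meets goal)"
    else:
        status = "red (misses goal)"
    # Simple trend: compare the latest result to the earliest point on record.
    trend = "improving" if history and actual >= history[0] else "declining"
    return {"metric": name, "actual": actual, "goal": goal,
            "variance": variance, "status": status, "trend": trend}

# Example: an application-rate metric trending up against a 75% goal.
print(scorecard_row("On-the-job application", actual=78, goal=75, history=[70, 72, 74]))
```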
In the end, learning measurement should be built as a sustainable process, not a one-off project. The process needs to be repeatable, so keeping it simple is a best practice that leading organizations use to scale learning measurement and to sustain a data-driven culture, because a simple process is easy to implement and maintain over time.
Do you want to learn about our technology, a simple tool that can help you measure learning effectiveness and tell your story of impact? Contact us to hear why we're the fastest-growing learning measurement tool in the market.
Thank you,
The Performitiv Team