Learning Evaluation Rationale

Overview

This post provides the rationale behind the Performitiv standard learning evaluations.  Organizations may also customize these evaluations, or create their own, using the Performitiv evaluation authoring tool.

Background

The Performitiv team comprises thought leaders with, collectively, over 50 years of measurement and performance improvement experience.

The Performitiv team is well versed in measurement methodology, including the Kirkpatrick model for L&D evaluation and Fred Reichheld's Net Promoter System (NPS) for performance improvement.

Analysis

Over time, as research has been conducted on evaluation data, a consistent conclusion has emerged: multiple questions categorized under the same header (i.e. question category) tend to draw similar responses, so adding significantly more questions to an evaluation may not yield further statistical relevance or analytical insight.  Conversely, further research shows that more questions can have a negative impact on response rates and on the integrity of the responses themselves.  As a result, a concise survey at the end of a learning experience is more likely to produce data that is actually used for performance improvement.

Traditional Learning Methodology

The most widely adopted learning measurement methodology is the Kirkpatrick Learning Levels model, created by Dr. Don Kirkpatrick.  The major headers for a learning evaluation include the following:

Reaction (aligned to traditional Level 1)

Learning (aligned to traditional Level 2)

Behavior (aligned to traditional Level 3)

Results (aligned to traditional Level 4)

In addition to evaluating learner reaction against the aforementioned constructs in close proximity to the learning experience, it is also useful to gather feedback some time later, once the learner is back on the job.  This follow-up evaluation covers:

Enablers/Barriers

Impact (aligned to traditional Level 3)

Results (aligned to traditional Level 4)

Net Promoter System (NPS)

NPS was introduced as a measurement of customer satisfaction but has applicability to learner satisfaction as well.  Using the NPS methodology on both of the aforementioned evaluations can provide a proxy for the overall performance of a learning intervention, with the first being more predictive than the second.  NPS is also used in other areas of organizational performance measurement and management, so L&D's use of it can align with these broader organizational measurement initiatives.
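The NPS calculation itself is standard: respondents rate likelihood to recommend on a 0-10 scale, where 9-10 are promoters, 7-8 are passives, and 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors.  A minimal sketch (the sample scores below are illustrative, not real evaluation data):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
sample = [10, 9, 9, 8, 7, 6, 10, 5, 9, 8]
print(nps(sample))  # 30
```

Because the score is a single number on a -100 to +100 scale, it is easy to trend over time and compare across programs, which is part of its appeal for concise end-of-course evaluations.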

Evaluation Environment

In today’s environment there are three trends: 1) increasing use of mobile data collection, 2) an over-surveyed employee base, and 3) the reality that practitioners do not analyze the vast majority of the data they collect.  These trends call for articulate, concise evaluations that support the constructs above along with NPS, yielding an instrument that is methodology-aligned yet reasonable to collect, analyze, and report on in today’s evaluation environment.

In addition, the focus must shift from measurement and reporting to performance improvement.  Resources are best spent on improvement rather than on measurement and reporting alone; today’s environment is less about data and analytics and more about improvement and change.

Conclusion

The Performitiv standard learning evaluations are built on the following elements, which L&D practitioners should consider when using them:

-Over a half century of experience in measurement and performance improvement.

-Consideration of historic trends in data collection and reporting.

-Alignment to widely adopted learning evaluation models.

-Use of widely adopted performance improvement models.

-Consideration of the current evaluation environment.

-Organizations have the choice to use, edit or replace the Performitiv standard evaluations when using the Performitiv software.

-Organizations may download raw data collected within the Performitiv software for analysis outside of the software.

Interested in learning more about these evaluations?  Contact us at info@performitiv.com and we’d be happy to share more about them while learning about your learning measurement process too.
