There was a time when the only metrics requested from learning and development officials were the number of people taking part in the training and the cost involved. In other words: basic effectiveness and efficiency.
As with everything, however, learning and development has evolved. L&D is now a business-critical change agent. It’s not enough to measure inputs such as the number of courses offered and attendance; learning and development must also look at outputs and outcomes.
“We’re in the process of trying to become a learning organization, and to become a learning organization you have to be nimble. You have to have a culture of leaders as teachers. You have to have a culture of recognizing those things that contribute, and actually those things that lead to success,” said Brad Samargya, Chief Learning Officer for telecommunications company Ericsson.
When looking at learning programs, the first question must be whether the program is effective and how its efficacy affects the return on investment.
The solution is to mine data on program effectiveness and analyze the outcomes. From there, strategies should aim to increase effectiveness, including through changes in learning technology.
Samantha Hammock is the Chief Learning Officer for American Express. Her company employs a learning management system as part of its learning process. Hammock says measurement is the company’s biggest need.
“If we’re going to mandate training, we had better be robust in tracking and reporting. Is the experience getting better? Is the knowledge increasing? We have put it through workforce analytics to slice and dice some of those metrics,” Hammock said.
But you can’t simply use all the data mined from a program. You need to determine whether it is good data.
How do HR learning professionals accomplish that goal?
Models of Evaluation
Learning and development delivers significant value to a business when you consider its impact on jobs within the company and on business results. Learning professionals who want to predict outputs and outcomes should consider the following models.
Four-Level Training Evaluation Model
First is the late Dr. Donald Kirkpatrick’s Four-Level Training Evaluation Model. It can help learning professionals in the human resources space objectively analyze the effectiveness and impact of training, which in turn points to ways of improving it.
Dr. Kirkpatrick, who was a Professor Emeritus at the University of Wisconsin when he died, first published the model in 1959. It is made up of four levels:
- Reactions
- Learning
- Behavior
- Results
An explanation of each follows. Note: the word ‘trainees’ is used here as a synonym for employees.
Reactions measure how your trainees react to the training. This is important because it helps you understand how well the training is being received. It also helps predict success as the program continues and identify areas the training has missed.
Learning measures what trainees are actually learning. When the program was outlined, no doubt a series of learning objectives were set. This metric allows learning professionals to understand what trainees are learning and what they aren’t, and it supports predictions about where the program can improve.
Behavior measures how the behavior of trainees has changed since the training began. More specifically, it looks at how the trainees are applying the information. In terms of prediction, this particular metric can be difficult to apply: if a behavior doesn’t change, that in no way means the trainee hasn’t learned the material. It may simply mean the individual chooses not to apply it.
Results measure the outcome of the training. Using this measurement allows for further prediction of learning and business outcomes. It also allows the training to be adjusted toward a more desired outcome, especially where a company’s bottom line is concerned.
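To make the four levels concrete, here is a minimal sketch in Python of how evaluation data for a single program might be recorded at each level. All field names, scales, and numbers are hypothetical illustrations, not part of Kirkpatrick’s model itself.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class KirkpatrickRecord:
    """Evaluation data for one training program, organized by Kirkpatrick's four levels.
    Field names, scales, and thresholds are illustrative, not prescriptive."""
    program: str
    reaction_scores: list[float] = field(default_factory=list)   # Level 1: e.g. 1-5 satisfaction ratings
    learning_scores: list[float] = field(default_factory=list)   # Level 2: assessment scores, 0-100
    behavior_observed: list[bool] = field(default_factory=list)  # Level 3: did the trainee apply the skill on the job?
    result_metric: float | None = None                           # Level 4: e.g. change in a tracked business KPI

    def summary(self) -> dict:
        return {
            "avg_reaction": mean(self.reaction_scores) if self.reaction_scores else None,
            "avg_learning": mean(self.learning_scores) if self.learning_scores else None,
            "behavior_adoption_rate": (
                sum(self.behavior_observed) / len(self.behavior_observed)
                if self.behavior_observed else None
            ),
            "business_result": self.result_metric,
        }

# Example usage with made-up numbers
record = KirkpatrickRecord(
    program="New-hire onboarding",
    reaction_scores=[4.2, 4.5, 3.8],
    learning_scores=[88, 72, 95],
    behavior_observed=[True, True, False],
    result_metric=0.06,  # e.g. a 6% improvement in a quality score
)
print(record.summary())
```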
Predictive Learning Impact Model
Another model to consider is the Predictive Learning Impact Model.
Source: Diagnosing Key Drivers of Job Impact and Business Results Attributable to Training at the Defense Acquisition University by Nick Bontis, Chris Hardy and John R. Mattox (2011)
The diagram above shows the links between learning and job impact. Furthermore, there is a strong link between job impact and business results.
For a learning program based on this model to be successful and have a strong impact on jobs, three critical components are needed:
- Buy-in
- Course quality
- Instructor effectiveness
An explanation of each follows.
Buy-in, or worthwhile investment, focuses on the trainees’ perception that the training is a worthwhile investment in their individual careers or for their employer. Of the three components, this one is of supreme importance; without it, the others are not as effective. A learner who does not buy in to the training is less likely to make it personal. Measuring buy-in shows which parts of the program deliver quality and which are wasted effort.
Course quality has a high impact on learning, more so than the effectiveness of the instructor. Providing content and material critical to the learner’s role will maximize buy-in.
Lastly, instructor effectiveness. Like course quality, instructor effectiveness has a dual impact on learning and buy-in. An instructor who is captivating and engaging, and who offers real examples and quality coursework, will increase the chances of success. The opposite is true of an instructor who possesses none of those qualities.
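The original study’s statistical approach is not reproduced here; as a purely illustrative sketch, the Python snippet below (with invented survey numbers) simply correlates each of the three drivers with a self-reported job-impact rating, one rough way to explore which component moves most closely with job impact in practice.

```python
from statistics import correlation

# Hypothetical post-course survey responses (one value per learner, 1-5 scale),
# plus a self-reported job-impact rating gathered some weeks after training.
buy_in            = [4, 5, 3, 2, 5, 4, 3, 5]
course_quality    = [4, 4, 3, 2, 5, 4, 2, 5]
instructor_effect = [5, 4, 4, 3, 5, 3, 3, 4]
job_impact        = [4, 5, 3, 2, 5, 4, 2, 5]

# A naive driver check: which component tracks job impact most closely?
for name, driver in [("buy-in", buy_in),
                     ("course quality", course_quality),
                     ("instructor effectiveness", instructor_effect)]:
    print(f"{name:>25}: r = {correlation(driver, job_impact):.2f}")
```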
While it may seem unclear how these models address the issue, they are a means to an end: they help ensure the data mined from a learning program is of good quality and ready for analysis.
Bob Dick is the manager of instructional technology for Humana. Dick says the company, in addition to measuring success, measures failures. This helps predict and protect the return on investment.
Learning Metrics: Don’t Wait for the Game to End Before Determining the Score
Contributor: Bob Dick, Manager of Instructional Technology, Humana, Inc.
One consequence needs to be acknowledged before you begin: if the clearly defined and measurable finish line of your curriculum is challenging and fairly written, there is still going to be a percentage of your learners who fail to reach that finish line. A good learning organization has a solid idea going in of what this fail (yes – I said "fail") percentage will be, and has an agreed-upon remediation plan in place, ready to fire when and as people fail. Within my learning organization, this process is where we generate the real results, and eventually the real ROI (as measured through time to proficiency, quality metrics and retention statistics of the people we train).
As educators, we cannot reasonably expect to take corrective steps in a course simply by looking at test scores at its end. Testing throughout training must be done early and often. Our standard is that some form of assessment is given every 30 minutes of training. It is critical to note that this does NOT mean that every 30 minutes we stop the training and send everyone to the LMS to take a test, nor does it mean that every test is or should be formally graded. Very often the only purpose of these tests (which may be as simple as a quick Q&A session between learners and facilitator) is to level-set between learner and facilitator as to what level of understanding has been achieved, and whether or not it is prudent to move forward. This is a somewhat informal process that relies on the skill and professionalism of the learning professional in the classroom. Our work kicks in every time a scored test is given.
As mentioned above, waiting for the game to end before determining the score is not a particularly effective strategy. Within my organization, our first-level report – run several times throughout the day – identifies how many people have taken a given test, how many have passed, how many have failed, and the average score. These results are then compared to the agreed-upon standards for success. All well-written tests will have SOME "failure" – we are looking for instances where the level of failure exceeds what was expected. In instances where failure rates are higher than expected or average scores lower than expected, it is critical to the success of the training that this be identified as quickly as possible by the support team so that actionable feedback can be provided to the learning facilitator while the situation is still correctable.
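A rough sketch of what such a first-level report might compute is below. The data shape, cut score, and expected fail rate are hypothetical placeholders, not Humana’s actual reporting system.

```python
# Hypothetical raw test results: one (learner_id, score) pair per attempt.
results = [("a01", 92), ("a02", 68), ("a03", 75), ("a04", 55), ("a05", 81)]

PASSING_SCORE = 70          # assumed cut score
EXPECTED_FAIL_RATE = 0.15   # the agreed-upon standard for this test

taken = len(results)
passed = sum(1 for _, score in results if score >= PASSING_SCORE)
failed = taken - passed
average = sum(score for _, score in results) / taken
fail_rate = failed / taken

print(f"taken={taken} passed={passed} failed={failed} avg={average:.1f}")

# Flag the class for the support team only when failure exceeds what was expected.
if fail_rate > EXPECTED_FAIL_RATE:
    print(f"ALERT: fail rate {fail_rate:.0%} exceeds expected {EXPECTED_FAIL_RATE:.0%}")
```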
At the point where failure is identified, several pieces of data are collected and distributed to our learning managers, curriculum manager, and our instructional designers. These data points include:
- Are the results unusual compared to other training efforts using the same material?
- Are there specific questions that were missed more often than others (our standard is anything where less than 85 percent of our learners were successful)?
- Is there anything that would indicate technical issues prevented a successful outcome?
To the first bullet: If we have an instance where a class is underperforming when compared to other classes, it is possible that we need to provide coaching to the facilitator. However, it can also be an indicator that a system or process change has made the answer to a question or questions open to interpretation, or inaccurate. Until the second bullet is fully addressed, no real conclusion can be drawn on the first.
Expanding the second bullet: Once the accuracy of the question and answer has been established, the instructional designer and manager work together to ensure: that problem questions are worded fairly; that distractors are reasonable; that the questions themselves are relevant to the competencies we expect the learner to come away from the class with; and that the questions can be directly tied to a specific point in the curriculum, as well as a specific objective of the training. Once all this analysis is complete, we can then provide specific, focused and actionable steps to our instructors in such a time frame as to be impactful.
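A minimal sketch of the question-level check described above, using invented response data and the 85 percent standard mentioned earlier; the structure is illustrative, not Humana’s actual tooling.

```python
# Hypothetical item-level responses: question id -> correct/incorrect flags,
# one entry per learner who answered that question.
responses = {
    "Q1": [True, True, True, True, False],
    "Q2": [True, False, False, True, False],
    "Q3": [True, True, True, True, True],
}

ITEM_SUCCESS_STANDARD = 0.85  # questions below this success rate get reviewed

for question, answers in responses.items():
    success_rate = sum(answers) / len(answers)
    if success_rate < ITEM_SUCCESS_STANDARD:
        # Candidates for review: wording, distractors, relevance to a competency,
        # and the tie back to a specific point and objective in the curriculum.
        print(f"{question}: {success_rate:.0%} success -- review with instructional design")
```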
While these steps certainly add time to our overall process, the results are undeniable. Pass rates are up nearly 20 percent in the last year, and average assessment scores are up over 5 percent over the same period. Quality scores, retention and time to proficiency during our recent ramp-up in support of our major line of business (4,000 new associates onboarded in a three-month window) all exceeded expectations.
And the jump from those numbers to ROI is simple math.
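As a hedged illustration of that simple math (the cost and benefit figures below are invented, not Humana’s), the commonly used training ROI calculation compares net monetized benefit with program cost.

```python
# Illustrative only: the plug-in numbers are invented placeholders.
training_cost = 250_000       # total program cost
monetized_benefit = 400_000   # e.g. value of faster time to proficiency and lower attrition

roi = (monetized_benefit - training_cost) / training_cost
print(f"ROI: {roi:.0%}")      # -> ROI: 60%
```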
Conclusion
For HR professionals in the learning space, the mission is clear: move the needle on the efficacy of learning programs. Properly evaluating the company’s programs and then acting on the resulting data will lead to discovery and, ultimately, the ability to drive desired outcomes and ensure a positive future of work.