Three Categories to Simplify L&D Measurement

Simplify your measurement strategy with these three categories.

I remember staring in complete overwhelm at the goal planted squarely under my name in Workday: "Create an organized and repeatable L&D measurement strategy." We were already into Q2, time was ticking, and I was procrastinating.

I was working at a smaller, rapidly growing employee benefits company. We had already made many changes to our L&D strategy, including reorganizing the team, putting together a playbook to outline our standards and practices, and starting to work differently with stakeholders. We had some rudimentary metrics in place, but I knew we were nowhere close to what we needed to achieve our overall goal of working as a strategic partner to the business.

What I already knew about measurement.

I backed up and started with what I already knew, since that needed to be the foundation of the measurement strategy.

  1. The purpose of the L&D measurement strategy, just like the measurement strategy of every other team in the company, is to inform business decisions. We weren't measuring to justify our existence, but to provide data we could use to improve our work.

  2. We needed to make sure that we used any data we collected. A mentor drilled this into my head early in my career: measurement without practical application is a waste of time for everyone involved.

I knew the direction we needed to go, why we needed to go there, and that we had to be careful not to create measurement for the sake of measurement alone. But I still needed to wrap my brain around how to get it all organized. Like most L&D teams, we ran a large number of programs for many different business units. We continuously created content aimed at a variety of audiences throughout the company. There was a lot to measure!

Three categories of measurement for L&D.

That's when I discovered three categories of measurement: efficiency, effectiveness, and outcomes. I found the categories within the pages of the book Measurement Demystified: Creating Your L&D Measurement, Analytics, and Reporting Strategy. That was it! I knew these categories could both organize the data we gathered and illuminate any gaps in our strategy.

Even better, the categories aligned to parts of both the Kirkpatrick and Phillips models of evaluation, models I had used in the past but that, admittedly, still made my brain spin a bit when I dove in. With this little simplification, breaking all measurement into three categories, Kirkpatrick and Phillips made more sense. As did the idea that an overall measurement strategy needs all three of these categories, but they don't all need to be equally represented.

MEASUREMENT CATEGORY #1: Efficiency

Efficiency measures answer questions like, "How many? How much? How often?" In other circles I have heard them referred to as "activity measures."

Most L&D teams are collecting efficiency measures, whether or not they are regularly reporting them. They usually include items like the number of people who participated in training, enrolled in online courses, or clicked on learning articles. They may also include some metrics related to team efficiency, such as how many hours team members spent facilitating, how many new learning courses or videos were completed each quarter, how much it costs to create a new course or train a new employee, etc.

In our company, we had these down, and we were even reporting many of them regularly. But they really weren't organized into a repeatable process. We also weren't doing much else with the data, meaning we weren't analyzing it to make business decisions about what was working and what wasn't. So they didn't align to the direction and foundation of our measurement strategy. Lucky for me, organizing and analyzing were an easier fix than trying to figure out what to measure.

MEASUREMENT CATEGORY #2: Effectiveness

Effectiveness measures focus on Kirkpatrick and Phillips levels 2 and 3: learning and behavior change. These measures tell us whether what we are doing is helping people learn and change their behavior.

Oftentimes these measures are tied to performance metrics that L&D doesn't own but does need to monitor based on the problems we are attempting to solve. For example, if we are trying to improve quality scores for contact center agents, we are going to monitor the quality scores (the performance metric in this case) both before and after participation in the learning experience we design. The contact center owns these metrics, so we need to establish a collaborative way to share the information with the key stakeholders in that area.

Effectiveness measures can also be gathered through surveys or evaluations tied to the programs we administer. These should be designed to ask specifically whether participants learned the information (acquired new knowledge) and whether their behavior changed as a result. Without related performance metrics, we can ask participants to self-report and/or send the survey to participants' managers to report on whether they have observed learning and application of new skills and knowledge. If the learning team administers the surveys, it owns and monitors the data, sharing results with stakeholders.

In my case, we had goals to improve performance metrics, but we either hadn't fully defined the metrics we were watching or hadn't set up processes for teams to share metrics with us. We also regularly administered a survey upon completion of new employee onboarding that was designed to capture data on learning and application. But again, we weren't using this information to make business decisions, and we didn't report it regularly to stakeholders. This was another area where we didn't need to create anything new. Instead, we needed to get better at using and communicating what we had and partner more closely with specific business units so that we could access the metrics they were tracking.

MEASUREMENT CATEGORY #3: Outcomes

Outcome measures are those that show impact on overall business goals and initiatives. They are tied to Kirkpatrick level 4 and Phillips levels 4 and 5. This is the category where most L&D professionals get stuck. According to this year's LinkedIn Learning report, only 4% of large-scale initiatives measure impact.

If you and your team aren't included in that 4%, don't panic. Per Measurement Demystified, we don't need to have outcome measures for everything.

Let me repeat that for the people in the back. We don't need to have outcome measures for everything we do. Trying to find them for everything creates unnecessary work and, consequently, unnecessary stress.

Outcome measures should focus on the biggest and most important initiatives, the ones aligned to the company's overall strategy. That means your daily tasks, like coordinating trainings in the LMS and maintaining content accuracy, don't need outcome measures attached, unless improving those items is part of a larger company strategy at the current time.

For example, let's say that one of the current company initiatives is to improve the efficiency of processes and reduce expenses. The L&D team aligns with this initiative by setting a goal to reduce onboarding expenses. Because this goal is tied to a larger strategic initiative, it is worthy of the time and effort needed to measure the outcome. The L&D team puts measurement in place to determine current expenses and then, based on the steps taken to work towards the goal, shows the resulting decrease in expenses. They then report this decrease as part of their contribution to the overall company initiative. Outcome measure complete!

In my case, we had three major projects that year tied to overall company initiatives. Each of those projects had some efficiency and effectiveness measures, but we also added a layer: a way to measure progress and outcomes related to the overall initiatives. This was where our outcome measures started and stopped. Yes, we attached and monitored efficiency and effectiveness measures for the rest of our team's daily work, but not outcome measures. The difference was that the daily work wasn't tied to a strategic initiative; the major projects were.

Using the categories to create your measurement strategy.

To get started using these three categories, begin with a simple audit of what already exists, sorting it into the appropriate category.

  • What are you already measuring? What category does it fit into (efficiency, effectiveness, or outcome)?

  • What performance measures (not owned by L&D) are impacted by the work you and your team complete? How can you ensure regular access to those measures via a system and/or communication with the owners?

  • What projects are you doing that tie to your company's overall strategic initiatives? Do you have measures in place to determine impact?

Once you have completed your audit, start asking questions about the measures that exist and don't exist.

  • Do you have measures in all three categories? Which category/categories are lacking? What might be needed to populate them?

  • Does every measure help you make informed business decisions? If not, you may no longer need to spend time and effort gathering it. Don't measure for the sake of measurement.

  • Do you have systems/processes in place to regularly gather and analyze the measures you need?
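If it helps to see the audit as a structure, here is a minimal sketch in Python of sorting measures into the three categories and flagging gaps. The measure names and the `informs_decisions` flags are hypothetical examples, not taken from any real audit:

```python
# Hypothetical measurement audit: sort each measure into one of the three
# categories, then flag empty categories and measures that don't inform
# business decisions (candidates to stop collecting).

CATEGORIES = ("efficiency", "effectiveness", "outcome")

# (measure name, category, informs business decisions?)
audit = [
    ("training attendance", "efficiency", True),
    ("course completions", "efficiency", False),
    ("onboarding survey: applied new skills", "effectiveness", True),
    ("contact center quality scores", "effectiveness", True),
]

def summarize(audit):
    by_category = {c: [] for c in CATEGORIES}
    for name, category, informs in audit:
        by_category[category].append(name)
    # Categories with no measures are gaps in the strategy.
    gaps = [c for c in CATEGORIES if not by_category[c]]
    # Measures that don't drive decisions are candidates to drop.
    drop_candidates = [name for name, _, informs in audit if not informs]
    return by_category, gaps, drop_candidates

by_category, gaps, drop_candidates = summarize(audit)
print("Gaps:", gaps)
print("Reconsider:", drop_candidates)
```

Even a simple spreadsheet with the same three columns accomplishes the same thing; the point is that the category column instantly shows where the strategy is thin.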

When I completed my audit and asked these questions years ago, I discovered that we had access to more data, and were gathering more data, than I thought. It just wasn't organized or analyzed. I also discovered we were great at activity measures, so much so that we could back off a few we were regularly monitoring because they didn't help us make business decisions. We were marginal at effectiveness measures, especially when it came to tying back to performance data, so we had to partner better with stakeholders. Thinking about outcome measures had me overwhelmed, but when I reframed them as tied only to strategic initiatives, they became simple. Thanks to these three categories, I was able to create a robust measurement strategy that was realistic and completely doable, even within our small team!
