
Skills & Workforce

How to measure training ROI with assessment data

Huneety Team · April 17, 2026 · 4 min read
[Figure: before-and-after bar chart showing competency score growth after training investment]

Where training needs come from

Training needs come from cross-referencing two sources: performance management evaluations and competency gap data from 360 assessments. Most organizations get this wrong. They build their L&D catalog from employee wishlists or engagement surveys. That produces a catalog of what people want to learn, not what the business needs them to learn.

Performance evaluations answer: where are people falling short of expectations? Competency gap data from 360 assessments answers: where are the skill deltas versus role requirements?

When both sources agree on a gap, you have a real training need. When only the survey shows a need, you have a preference. Budget the first. Ignore the second.

This distinction matters for ROI before any measurement begins. If you train people on skills they already have (or skills the business does not need), no amount of post-assessment will produce a positive return. The ROI was zero at intake.

Define outputs before training starts

Every training action must have an expected output defined before it starts. Not “attend the course.” Not “complete the module.” Something observable: a deliverable, a behavior change, a demonstrable skill applied to a real task.

This is exactly why the 70/20/10 model matters. The 70% channel (on the job) forces an output because the work itself is the evidence: a stretch assignment either produces a deliverable or it does not. The 20% channel (learning through others: coaching, mentoring, feedback) follows the same logic. The 10% channel (formal learning) is where most organizations skip this step and end up with completion rates instead of competency change.

On-the-job and through-others activities only produce measurable ROI if they are logged inside an individual development plan tied to the competency they target. Without that link, you have activity without attribution.

Output definition: weak versus strong

Weak outputs

  • "Attend leadership workshop"
  • "Complete the online module"
  • "Shadow a senior manager for two weeks"

Strong outputs

  • "Lead the Q3 cross-functional project and deliver the post-mortem report"
  • "Apply the framework to three real client cases and document results"
  • "Run the weekly team stand-up independently by month end"

Skills gap analysis guide

The gap analysis pillar guide covers the full process from framework to action plan.

Map catalog to skill gaps

Before measuring ROI, you need to know which programs address which gaps. The catalog gap map provides this visibility.

The map classifies each program and each gap into one of four states:

  1. Covered: a learning program directly addresses an identified skill gap
  2. Uncovered: a skill gap exists but no learning program addresses it
  3. Redundant: two or more programs cover the same gap, signaling potential spend overlap
  4. Orphan: a program covers a skill where no gap exists, meaning the budget may be misallocated

The most valuable signal is “uncovered” gaps: skill deficits where no learning program exists at all. These are the gaps your budget has been ignoring. They need either new L&D investment or non-training interventions (stretch assignments, mentoring).

Redundant programs are not automatically bad. Two courses on the same skill may serve different proficiency levels. But if two programs target the same level of the same competency, one is wasted spend.

Orphan programs are the silent budget drain. They cover skills your workforce already has. Unless they serve onboarding or compliance purposes, they are candidates for retirement.
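
As a rough sketch of how the four states can be derived (competency names, program names, and the keying by competency are hypothetical; a real catalog map would also key on proficiency level, per the redundancy caveat above):

```python
from collections import Counter

# Hypothetical inputs: competencies with identified gaps, and the
# competency each learning program targets.
gaps = {"Data analysis", "Stakeholder management", "Coaching"}
catalog = {
    "Advanced Excel": "Data analysis",
    "SQL Basics": "Data analysis",            # second program on the same gap
    "Negotiation 101": "Stakeholder management",
    "Public Speaking": "Presentation",        # targets a skill with no gap
}

programs_per_gap = Counter(catalog.values())

covered   = {c for c in gaps if programs_per_gap[c] >= 1}
uncovered = gaps - covered                                    # no program exists at all
redundant = {c for c in gaps if programs_per_gap[c] > 1}      # potential spend overlap
orphans   = {p for p, c in catalog.items() if c not in gaps}  # retirement candidates

print("Covered:", covered)      # {'Data analysis', 'Stakeholder management'}
print("Uncovered:", uncovered)  # {'Coaching'} -- the budget has been ignoring this
print("Redundant:", redundant)  # {'Data analysis'} -- check proficiency levels first
print("Orphans:", orphans)      # {'Public Speaking'}
```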

Before-and-after assessment cycles

The only reliable way to measure training ROI is to assess the same people on the same competencies before and after the development intervention. Satisfaction scores (“I enjoyed the workshop”) and completion rates (“42 people finished the course”) measure activity, not impact.

Run a baseline assessment before any development actions begin. Record the gap scores per person, per competency. After the development cycle completes (typically 12 months for a full re-assessment), run the same assessment again. The difference in scores is the competency movement.

This sounds obvious. In practice, most organizations skip the baseline or change the framework between cycles. Both break the comparison. Lock the competency framework for the full measurement period. If you need to update it, treat the update as a new baseline.

The skills gap analysis process produces the baseline data. Training ROI measurement is what happens after you act on that analysis.
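
A minimal sketch of the before-and-after comparison, assuming gap scores are stored per (employee, competency) and the framework stayed locked between cycles (all data hypothetical):

```python
# Baseline gap scores, recorded before any development actions begin.
baseline = {
    ("E-1042", "Coaching"): 3.0,
    ("E-1042", "Data analysis"): 2.0,
    ("E-2077", "Coaching"): 2.5,
}
# Same assessment, same framework, 12 months later.
follow_up = {
    ("E-1042", "Coaching"): 1.0,
    ("E-1042", "Data analysis"): 2.0,   # no movement: worth investigating
    ("E-2077", "Coaching"): 1.5,
}

# Competency movement = baseline gap minus remaining gap. Only keys present
# in both cycles are comparable; changing the framework breaks this join.
movement = {
    key: baseline[key] - follow_up[key]
    for key in baseline.keys() & follow_up.keys()
}
for (employee, competency), delta in sorted(movement.items()):
    print(f"{employee} / {competency}: gap reduced by {delta:.1f} points")
```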

Burden score as ROI metric

Traditional training ROI formulas (net benefit divided by cost, expressed as a percentage) require attributing financial value to skill gains. For most HR teams, that attribution is speculative. A more practical metric is the aggregate gap score, a burden score: affected employees multiplied by average gap depth.

After a development cycle, recalculate the aggregate gap score. The reduction is your measurable return. This is a derived metric, not an industry standard, but it uses the same unit (people times gap points) that the gap analysis already established.

Aggregate gap score change

  • Before: score of 180 (60 employees x 3.0 average gap)
  • After: score of 72 (60 employees x 1.2 average gap)
  • Reduction: 108 points (60% improvement)
  • Cost: total L&D spend on programs covering that competency

Aggregate gap score reduction tells you how much organizational drag you removed, measured in people-times-gap-points. It does not require you to guess the dollar value of a proficiency point. It gives L&D leaders a defensible number to present in budget reviews.

Compare the reduction across competencies to see which investments produced the largest lift. Compare across departments to see where the same investment produced different results (which often reveals a delivery or manager-support problem, not a program quality problem).
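
The arithmetic behind the table above fits in a few lines. A sketch using the same worked figures (60 affected employees, average gap 3.0 before and 1.2 after):

```python
def aggregate_gap_score(affected_employees: int, average_gap: float) -> float:
    """Aggregate gap score = affected employees x average gap depth."""
    return affected_employees * average_gap

before = aggregate_gap_score(60, 3.0)   # 180.0
after = aggregate_gap_score(60, 1.2)    # 72.0
reduction = before - after              # 108.0 points
improvement = reduction / before        # 0.6 -> 60%

print(f"Reduction: {reduction:.0f} points ({improvement:.0%} improvement)")
```

Run the same few lines per competency or per department and you get the cross-sectional comparisons described above.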

Monitor execution, not completion

This is the biggest source of L&D budget waste. Organizations track who finished the course (completion) instead of what happened after (execution). Completion rates tell you the program ran. They say nothing about whether anyone applied what they learned.

Execution monitoring runs on a quarterly cadence, embedded into the performance management cycle. Not a separate L&D review. Managers answer “what did they apply?” alongside “what did they deliver?” in the same PMS rhythm they already follow.

Two distinct rhythms serve two purposes:

  • Full re-assessment: every 12 months (annual cycle), producing the before-and-after score comparison
  • Execution check-ins on IDP progress: every quarter, inside the PMS cycle, tracking whether development actions produced observable outputs

The trigger is the PMS, not the LMS. Completion rates can stay as an operational metric (did the program actually happen?) but they are never an impact metric.
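
As an illustrative sketch of the quarterly rhythm (record layout hypothetical), the execution check reduces to counting delivered outputs against logged actions, not course completions:

```python
# Hypothetical IDP log. "output_delivered" is what the manager reports in
# the quarterly PMS review; it is independent of LMS completion status.
idp_actions = [
    {"employee": "E-1042", "competency": "Coaching",
     "expected_output": "Run the weekly team stand-up independently by month end",
     "output_delivered": True},
    {"employee": "E-2077", "competency": "Coaching",
     "expected_output": "Apply the framework to three real client cases and document results",
     "output_delivered": False},
]

executed = sum(a["output_delivered"] for a in idp_actions)
print(f"Execution rate this quarter: {executed}/{len(idp_actions)} "
      f"actions produced their expected output")
```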

Skills analytics

The skills analytics dashboard maps your learning catalog to actual gaps, tracks gap score reduction, and shows which programs produce competency movement.

Frequently asked questions

Can training needs come from employee surveys?

No. Surveys capture preference, not need. Training needs come from performance reviews cross-referenced with competency gap data from [360 assessments](/platform/360-degree-assessment). When both sources agree, you have a real need. When only the survey flags it, you have a preference that should not consume L&D budget.

How often should training impact be measured?

Two cadences. Execution monitoring happens quarterly, embedded in the PMS review: managers report what employees applied, not just what they attended. Full competency re-assessment happens annually, producing the before-and-after score comparison that proves ROI.

What if competency scores drop after training?

Score drops usually indicate one of three things: the assessment was more rigorous the second time (calibration drift), the training was not reinforced with practice, or external factors (reorganization, workload spikes) interrupted the development cycle. Investigate before concluding the program failed.

Why do some training programs produce no measurable ROI?

The top reason: the training was chosen from a wishlist, not from a gap. Beyond that, common causes include programs that target the wrong proficiency level, programs with no follow-up practice (formal learning without on-the-job reinforcement), and programs covering skills not in the competency framework (orphan content that cannot produce measurable score change).

Is aggregate gap score reduction better than a traditional ROI formula?

For most HR teams, yes. Traditional ROI requires converting competency points to dollar value, which involves assumptions that executives will question. Aggregate gap score reduction uses the same unit (people times gap points) that the gap analysis already established. It is internally consistent and easier to defend. When feasible, track a comparable untrained cohort alongside the trained group to isolate program effect.

Need help connecting your L&D catalog to actual competency gaps? Talk to the Huneety team about setting up skills analytics and ROI tracking.
