Competency models made sense when jobs were stable and change did not happen overnight. Today, HR teams still perform manual skills mapping, cross-reference competencies in Excel sheets, and spend months building libraries that are outdated before they ship. The question is no longer whether your organization needs a competency framework. It is how to build one that stays alive past its first year.
The results of traditional competency projects are predictable: skills and KPI libraries in Excel sheets, manual job mappings that match roles to company skills and competencies, and other hand-built frameworks. These projects often produce large libraries of competencies and KPIs to support performance management, talent acquisition, succession planning, and other HR processes. The libraries help structure skills across the organization, but a common hurdle follows: they quickly fall into obsolescence.
The traditional approach produces five structural failures. Libraries that are resource-intensive to maintain. Frameworks disconnected from the labor market. No link between assessment and learning. Declining employee engagement when competencies stop matching real expectations. And granularity so broad that 'Collaboration' means something different to every manager who reads it.
This guide covers the full build: five entry points (not one), the granularity decision most organizations skip, a step-by-step process, and the deployment models that determine whether the framework reaches people or stays in an HR folder.
A competency framework defines the observable behaviors a role requires and rates each person against those behaviors on a consistent scale. To build one: scope the frame, source behaviors from real managers, write proficiency anchors at five Dreyfus levels, build role profiles with target levels, and deploy to people through assessment campaigns and development plans.
Why traditional frameworks fail
Five structural problems have turned competency frameworks into a category that HR professionals approach with skepticism. Each one is avoidable, but only if you design against it from the start.
First, large manually built libraries are resource-intensive. HR teams still map skills into spreadsheets, cross-reference KPIs, and maintain libraries that take months to assemble. Depending on company size, these projects can take months or even years to complete. For this reason alone, competency models have been reserved for large corporations with dedicated HR teams. In the era of automation and AI, HR resources should be spent supporting other areas of the business rather than on time-consuming manual work.
Second, skills updates are disconnected from the job market. Once built, a framework is typically updated only when a manager cannot find a relevant KPI or competency in the existing library, when a job description changes, or before a performance management cycle to refresh yearly KPIs and reflect new skills priorities set by management.
Organizations that have not updated their competency model in three or more years face a high risk of obsolescence. Managers risk developing teams with skills and KPIs that are no longer relevant and disconnected from the external skills market. This means organizations are more likely to make blind decisions and critical errors over skills. At a time when skills are evolving fast, it is critical to build internal competency models based on both existing internal data and external job market evolutions.
Third, performance management systems do not connect to learning. HR must look at the assessment system to find a person's gaps, then manually search the LMS for matching training. The vast majority of HRMS platforms do not automatically link the competency library to the most relevant training opportunities. Because the assessment is not connected to the learning catalog, it is impossible to measure whether training actually closed the gap.
Fourth, relevance decay kills engagement. Building a competency model is a significant milestone that requires resources and commitment from every stakeholder involved. In some companies, competency levels are fixed to job positions, with a separate definition for each level. Evolving outside of the model is extremely complicated in these organizations, where competencies and levels have been locked in by management.
When competencies do not match the real expectations of the job, employees evolve outside the proposed framework. They take courses on their own because the proposed competency is not relevant to their roles. When employees do not trust the system, any advancement in 'competency level' becomes a reason to renegotiate salary rather than a development signal.
Fifth, granularity fails at the surface level. Most frameworks stay at a competency label ('Collaboration and Teamwork') without defining what that means in observable terms. Without sub-behaviors, a Dreyfus rating on 'Collaboration' means different things to different managers. The primary intent of the framework is to build a standardized system for development, but the rigid traditional framework cannot handle this granularity or recognize skill synonyms.
Five entry points to a framework
There is no single right way to build a competency framework. The entry point depends on what your organization already has and what you are trying to achieve. Five approaches cover the range.
Start from job descriptions
Job descriptions already contain the behaviors your organization values, expressed in the vocabulary your managers use. This is the recommended entry point because it avoids drift: the framework stays anchored to real roles rather than abstract ideals.
The process: collect JDs for your target roles, extract competencies using AI or manual review, map them to a standard taxonomy for comparability, and have HR review the output. AI can generate a first-draft framework from a set of job descriptions in minutes. HR edits the vocabulary and validates the behavioral anchors. This combination produces a framework grounded in what your organization actually does, not what a consulting framework says you should do.
Core competencies from values
Every organization has values. Integrity, customer focus, accountability. The question is whether those values have been translated into observable behaviors that can be assessed.
Core competencies are global: they apply to every employee regardless of role. They define the behavioral floor of the organization. A core competency like 'Collaboration and Teamwork' becomes operational only when broken down into concrete sub-behaviors: giving credit to others, involving others in decisions that affect them, placing team needs above individual needs, working with others toward common goals.
When deploying core competencies, set the scope to 'core' so that every current and future role profile automatically includes them. This is the foundation layer. Role-specific competencies are layered on top.
Leadership competencies for managers
Different employee groups need different competency sets. Leadership competencies (coaching, delegation, strategic thinking, team development) apply to the manager population and above. Specialist roles do not need 'Manages team performance' in their profile.
Deploy leadership competencies at the manager career level. The system auto-includes higher levels: directors and executives get leadership competencies automatically, plus any additional ones set at their own level. This creates a layered framework: core competencies for everyone, leadership for managers and above, strategic for directors and above.
AI-generate from industry data
No existing JDs? No existing framework? AI generates a competency framework from your industry and role descriptions using a master taxonomy of 1,700+ pre-built skills across 12+ industries. The output is a first draft: root categories, competencies with definitions, and behavioral anchors at five proficiency levels. HR reviews, edits vocabulary, and validates against the organization's actual needs. The AI output is the starting point, not the finished product.
Mix your framework with standard data
Most mid-sized organizations land here. You have an existing framework (partial or complete) with vocabulary your managers already use. Import it, map it to a standard taxonomy for analytics comparability, and fill gaps from the pre-built library.
The translation model preserves your language while making cross-organization benchmarking possible. Your managers keep calling it 'Client Relationship Management.' The system knows it maps to 'Stakeholder Communication.' Both are correct. Neither needs to change.
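A minimal sketch of how such a translation model might be represented. The mapping entries and function name below are illustrative assumptions, not the platform's actual data model:

```python
# Hypothetical mapping from internal competency vocabulary to a standard
# taxonomy label, so benchmarking compares like with like while managers
# keep their own language. Entries are illustrative.
TAXONOMY_MAP = {
    "Client Relationship Management": "Stakeholder Communication",
    "Deal Storytelling": "Persuasive Presentation",
}

def to_standard(internal_name: str) -> str:
    """Return the standard taxonomy label, falling back to the internal name."""
    return TAXONOMY_MAP.get(internal_name, internal_name)

print(to_standard("Client Relationship Management"))  # Stakeholder Communication
```

The fallback matters: a competency with no standard equivalent stays under its internal name rather than being forced into a wrong category.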
Before choosing an entry point, review 2 to 3 competency framework examples from your industry and 1 to 2 from adjacent industries. The goal is not to copy. It is to understand which competencies appear consistently (those are likely core) and which are unique to specific roles (those are functional). Spending more than a week on benchmarking usually means you are avoiding the harder work of writing your own definitions.
Complete guide
Competency mapping: the complete process
The 4 building blocks, 5-step process, framework methodologies, and 6 mistakes that kill frameworks.
Read the guide →
Choose your granularity
A competency framework can operate at two levels of depth. The choice affects assessment time, gap data richness, and development plan specificity. Most organizations do not discuss this choice explicitly and default to competency-level, which is not always the right answer.
Competency-level assessment rates each competency as a whole on the Dreyfus 0-5 scale. It is faster per assessment, simpler to calibrate across managers, and sufficient for organizations running their first assessment cycle. The trade-off: gap data is broad. You know someone is a level 2 on 'Collaboration,' but you do not know which specific behaviors are the gap.
Sub-skill level assessment rates the observable behaviors underneath each competency. It takes longer per assessment but produces specific gap data. When the gap is 'does not involve others in decisions that affect them,' the development action writes itself. When the gap is 'Collaboration is low,' it does not.
Are you developing 'Effective Communication' or 'Influential Communication'? Traditional competency models fail to manage this complexity. Most companies stay at a surface level, using competencies that are too broad for employees to understand clearly. To be effective, competencies must be broken down into practical sub-behaviors that align with the company's strategic goals. For 'Collaboration and Teamwork', that breakdown looks like:
- Giving credit to others for their contributions
- Involving others in making decisions that affect them
- Placing team needs above individual needs when priorities conflict
- Working with others toward common goals across functions
The toggle is a workspace setting, not a per-assessment decision. Organizations running their first cycle should start at competency level and move to sub-skills in the second or third cycle, once managers are calibrated on the Dreyfus scale and the behavioral anchors are validated.
The five steps
The process is the same regardless of entry point. Sequence matters: skipping the scoping step is how frameworks grow to 120 unrelated competencies.
Scope the frame
Start with 2 to 4 competency families (Leadership, Technical Mastery, Business Acumen, Interpersonal). Families are the scaffold. Individual competencies come later.
If you are not working in a top-tier company with a dedicated OD team, do not attempt a full-organization rollout on the first pass. Focus on a specific competency group (Leadership or Core Competencies) or target a specific population (succession candidates, high performers, high potentials). Expand gradually to other business units until the framework covers the entire organization.
It is recommended to build the taxonomy model with a dedicated task force that includes HR, top management, and subject-matter experts. Some competency models have been built without consultation, with one person deciding the company's skills for years to come. This produces a framework that may be technically correct but culturally disconnected.
Source behaviors from managers
Run 5 to 8 interviews with high-performing managers per competency family. Ask what 'good' looks like and what 'great' looks like for their best direct reports. The answers become behavioral anchor statements. This step is non-negotiable: a framework built by HR alone, without manager input, fails on contact with reality. Co-building with managers from the affected functions means their vocabulary goes into the anchors, and they become the internal champions.
Write proficiency anchors
One sentence per Dreyfus level per competency. Level 2 (Competent) describes what advanced beginners do with supervision. Level 4 (Proficient) describes fully responsible contributors. The gap between levels must feel real to the people being assessed. If two managers cannot watch the same meeting and land on similar scores, the anchor is too abstract.
This is where AI helps most. Generating behavioral anchors at five levels for 20 competencies takes weeks manually. AI produces the first draft in minutes. HR edits the language and validates against real workplace observations. The editing cannot be skipped. The drafting is what AI genuinely accelerates.
Assign to role profiles
For each role, select the competencies that apply and assign a target proficiency level. Not every role needs every competency. Role caps keep frameworks usable: 8 competencies for specialists and individual contributors, 10 for managers, 12 for directors, with a floor of 3 behavioral competencies per role.
Role profiles are where the framework becomes operational. Without them, competencies are a list of words. With them, competencies become assessment criteria with measurable targets. A person is not 'good at Communication.' They are a level 3 on Stakeholder Communication against a target of level 4, with a gap of one level that maps to specific development actions.
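That arithmetic can be sketched directly. This is a minimal illustration of a role profile with target levels and the gap computation; the competency names, levels, and `CompetencyTarget` structure are assumptions for the example, not the platform's schema:

```python
from dataclasses import dataclass

@dataclass
class CompetencyTarget:
    name: str
    target: int  # required Dreyfus level on the 0-5 scale

# Illustrative role profile (names and target levels are assumptions)
role_profile = [
    CompetencyTarget("Stakeholder Communication", 4),
    CompetencyTarget("Decision Making", 3),
]

# One person's assessed levels on the same 0-5 scale
assessed = {"Stakeholder Communication": 3, "Decision Making": 3}

# Gap = target minus current; positive values are development needs
gaps = {c.name: c.target - assessed.get(c.name, 0) for c in role_profile}
print(gaps)  # {'Stakeholder Communication': 1, 'Decision Making': 0}
```

A gap of one level on a named competency is actionable in a way that 'good at Communication' never is.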
The behavioral vs technical split matters at this step. Each role profile should maintain a minimum of one third behavioral competencies (collaboration, decision-making, coaching) alongside technical ones. The behavioral floor predicts performance across roles. Technical competencies are role-specific and trainable. Leaning too far toward technical creates a skills taxonomy rather than a competency framework.
Pilot on one department
Run a full cycle (360 assessment, calibration, gap review, IDP generation) with one department before company-wide rollout. Expect to revise 10 to 20 percent of the behavioral anchors after pilot feedback. That is the framework learning its own organization.
1. Scope the frame. Define 2-4 competency families. Start with one population or competency group.
2. Source behaviors. Interview 5-8 high-performing managers per family. Capture what 'good' and 'great' look like.
3. Write proficiency anchors. One sentence per Dreyfus level per competency. AI drafts, HR edits.
4. Assign to role profiles. Select competencies per role. Set target levels. Cap at 8/10/12 per role.
5. Pilot on one department. Full assessment cycle. Revise 10-20% of anchors from feedback.
Deployment decides everything
A competency framework that is built, documented, and filed is an HR initiative. A framework that is deployed to people through role profiles and assessment campaigns is infrastructure. The difference is not quality. It is deployment.
Three deployment scopes determine how competencies reach employees.
Core deployment applies a competency to every role in the organization. Core competencies come from organizational values: integrity, collaboration, customer focus. They define the behavioral floor. When you add a new role six months from now, core competencies are already assigned. No manual step required.
Level deployment applies a competency at a specific career level and above. Leadership competencies deploy at the manager level. Strategic thinking deploys at the director level. The system auto-includes higher levels: deploy 'coaching' at the manager level, and directors get it automatically.
Push deployment applies a competency to specific selected roles. Technical competencies for the compliance team. Industry-specific certifications for the clinical staff. One-time assignment, no persistent rule.
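The three scopes compose into a simple resolution rule. A minimal sketch, assuming an ordered list of career levels and illustrative competency names; the rule structures here are an approximation of the idea, not the platform's implementation:

```python
# Career levels are ordered; a 'level' rule auto-includes every level above it.
LEVELS = ["specialist", "manager", "director", "executive"]

core = {"Integrity", "Collaboration"}                      # every role, always
level_rules = {"manager": {"Coaching"}, "director": {"Strategic Thinking"}}
push_rules = {"Compliance Analyst": {"AML Regulations"}}   # one-time, per role

def competencies_for(role: str, career_level: str) -> set:
    """Resolve core, level, and push deployments for one role."""
    result = set(core)
    idx = LEVELS.index(career_level)
    for lvl, comps in level_rules.items():
        if idx >= LEVELS.index(lvl):  # deployed at level N applies to N and above
            result |= comps
    result |= push_rules.get(role, set())
    return result

print(sorted(competencies_for("Engineering Director", "director")))
```

A director resolves to core plus 'Coaching' (inherited from the manager level) plus 'Strategic Thinking', with no manual assignment step.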
Deployment to role profiles is only the first connection. The full cycle links the framework to four downstream systems that close the loop between assessment and development.
- Assessment campaigns: role profiles become the assessment questionnaire. Each competency in the profile is rated by self, manager, peers, and direct reports.
- Gap analysis: target proficiency minus current proficiency equals the gap. The gap is the signal, not the assessment score itself.
- Individual development plans: gaps generate 70/20/10 development actions: on-the-job stretch assignments, mentoring from internal experts scoring 4 or above on the same competency, and formal training matched to the specific competency gap.
- L&D catalog integration: map your existing training catalog to competencies. When a gap is identified, the system surfaces matching courses and workshops from your catalog before recommending new investment. This shows which gaps have training coverage and which need new programs.
- Re-assessment: run the next cycle. Measure whether gaps closed. The metric is competency score movement between cycles, not training completion. A course that was completed but did not move the score is a course that did not work.
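The re-assessment metric can be sketched as a comparison of scores across cycles. The data below is illustrative; the point is that completion alone flags nothing, while score movement does:

```python
# Hypothetical cycle data: training completion plus Dreyfus scores per cycle.
completions = {"Stakeholder Communication": True, "Decision Making": True}
cycle_1 = {"Stakeholder Communication": 2, "Decision Making": 3}
cycle_2 = {"Stakeholder Communication": 3, "Decision Making": 3}

# A completed course that did not move the score is flagged as ineffective.
ineffective = [
    comp for comp, done in completions.items()
    if done and cycle_2[comp] <= cycle_1[comp]
]
print(ineffective)  # ['Decision Making']
```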
This is where the traditional model breaks down most visibly. In most multinational companies today, HR must look at the performance management system to understand what skills have been assigned to a person, then manually look into the company's LMS to try to match skills to the right learning activity. The performance management system is not connected to the LMS. A deployed framework with integrated L&D closes this gap.
Built but not deployed
- Competencies defined in a document or spreadsheet
- No connection to assessment campaigns
- Managers unaware of the framework vocabulary
- L&D spend disconnected from competency gaps
- No measurable progress between cycles
Deployed to people
- Competencies assigned to role profiles with target levels
- Assessment campaigns auto-generated from role profiles
- Gaps visible per person, team, and department
- L&D catalog mapped to competency gaps
- Re-assessment measures whether gaps closed
Built for HR teams
Build your framework with 1,700+ pre-built skills
Import your own vocabulary, build manually, or let Huna AI generate from job descriptions. 20 root categories, 300+ competencies, deployed to roles in one click.
Explore the platform →
Keep the framework alive
A competency framework is a living document. Treating it as finished is the single most common cause of adoption decay.
Review after every assessment cycle, not annually. Each cycle produces signals: which behavioral anchors generated inconsistent ratings across managers, which competencies produced no useful gap data, which role profiles had competencies that managers marked as irrelevant. These signals drive 10 to 20 percent anchor revision per cycle. The revision is expected and healthy.
Monitor the connection between the framework and the labor market. When new job descriptions diverge from the existing framework, the framework needs updating. AI can detect this drift by comparing incoming JDs against the current taxonomy and flagging competencies that appear in new roles but not in the framework.
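Stripped of the extraction step, the drift check itself is a set difference. A minimal sketch, assuming some upstream step (here a hypothetical `incoming` set) has already pulled competency names out of new job descriptions:

```python
# Competencies currently in the framework (illustrative names).
framework = {"Stakeholder Communication", "Coaching", "Data Analysis"}

def flag_drift(jd_competencies: set, framework: set) -> set:
    """Return competencies appearing in new JDs but missing from the framework."""
    return jd_competencies - framework

# Hypothetical output of an AI extraction pass over incoming JDs.
incoming = {"Data Analysis", "Prompt Engineering"}
print(flag_drift(incoming, framework))  # {'Prompt Engineering'}
```

Flagged items are candidates for the next revision cycle, not automatic additions; HR still validates them.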
The revision cadence matters. Minor revisions after every review cycle: 10 to 20 percent of behavioral anchors adjusted based on rater feedback and calibration data. Major revisions every 2 to 3 years, or when the business strategy changes materially. Treating the framework as finished is how competency projects become laminated wall charts that managers silently stop using.
The Venn diagram at the start of this article describes the target state: a framework connected to both the labor market and the people it serves. When the three circles overlap, competency gaps are identified, new skills are updated from the market, and development opportunities are deployed to job families. Disconnected from either, the framework becomes an artifact of the year it was written.
If your HR team needs to build a competency framework from job descriptions or validate an existing one against current market data, talk to the team about your requirements.