
Soft skills vs hard skills: how to structure both in one framework

Most competency frameworks treat soft skills and hard skills as the same thing, then wonder why assessment results feel unreliable. The two types need different anchors, different rater groups, and different development paths. This guide explains the structural split and how to get the ratio right.

By Simon Carvi · Published April 2026 · 7 min read


What the distinction actually means in a framework

In a competency framework, every competency is either behavioural or technical. Behavioural competencies (soft skills) describe how a person works: communication, collaboration, conflict resolution, coaching, decision-making under ambiguity. Technical competencies (hard skills) describe what a person can do: financial modelling, Python programming, regulatory compliance, clinical assessment.

The distinction matters because the two types are observed differently, developed differently, and assessed differently.

A hard skill is verifiable. Either someone can build a financial model that reconciles, or they cannot. A soft skill is contextual. Someone might communicate clearly in a team meeting and poorly in a cross-functional escalation. The assessment design has to account for this difference, or it produces scores that look precise but mean nothing.

The 60/40 ratio and why it exists

Across the 313 pre-built competencies on Huneety, the default ratio is approximately 60% behavioural to 40% technical. This is not an arbitrary split. It reflects the observation that for most knowledge-work roles above entry level, what differentiates a Level 3 performer from a Level 5 is not whether they know more (they usually do), but how they apply what they know in context.

  1. Technical-heavy roles (engineering, clinical, legal). Shift the ratio closer to 50/50 or 40/60 behavioural/technical. These roles have hard prerequisites: if you cannot write production-grade code, behavioural competency does not compensate. But even here, the behavioural layer matters at senior levels where collaboration and mentoring define impact.
  2. Management and leadership roles. Shift the ratio to 70/30 or 80/20 behavioural/technical. A VP of Engineering who cannot code but can build and retain a team is more valuable than one who can code but cannot retain anyone. The higher the role, the more the behavioural layer dominates.
  3. The 60/40 default. For mid-career individual contributors and first-line managers, 60/40 covers the ground. 8 to 12 competencies per role, with 5 to 7 behavioural and 3 to 5 technical, is a reliable starting point. Adjust per function after the first assessment cycle based on where the signal is.

One mistake teams make is treating the ratio as a global rule rather than a per-role decision. A data scientist and a sales director at the same company should not have the same ratio. The framework should reflect the reality of each role, not a corporate-level policy.
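The per-role (not per-company) rule above can be expressed as a simple check. The following sketch assumes a 0-to-1 behavioural share and illustrative role-family bands taken from the ranges in this article; the function names, data shape, and example competencies are hypothetical, not Huneety's data model.

```python
# Sketch: check a role's behavioural/technical ratio against its
# role family's target band. Bands and names are illustrative.
TARGETS = {
    "technical":  (0.40, 0.50),  # engineering, clinical, legal
    "default":    (0.55, 0.65),  # mid-career ICs, first-line managers
    "leadership": (0.70, 0.80),  # management and leadership roles
}

def behavioural_share(competencies):
    """Fraction of a role's competencies tagged behavioural."""
    behavioural = sum(1 for _, kind in competencies if kind == "behavioural")
    return behavioural / len(competencies)

def ratio_in_range(competencies, role_family):
    lo, hi = TARGETS[role_family]
    return lo <= behavioural_share(competencies) <= hi

# A 9-competency role: 5 behavioural, 4 technical (share ≈ 0.56).
role = [
    ("stakeholder communication", "behavioural"),
    ("conflict resolution", "behavioural"),
    ("coaching", "behavioural"),
    ("decision-making under ambiguity", "behavioural"),
    ("collaboration", "behavioural"),
    ("financial modelling", "technical"),
    ("regulatory compliance", "technical"),
    ("Python programming", "technical"),
    ("risk modelling", "technical"),
]
print(ratio_in_range(role, "default"))    # True: fits the 60/40 band
print(ratio_in_range(role, "technical"))  # False: too behavioural-heavy
```

The point of the per-family lookup is the article's rule: the same role mix that is right for a first-line manager fails the check for an engineering role.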

How to assess soft skills and hard skills accurately

The assessment method for each type should match how the competency is observed.

  1. Behavioural competencies: multi-rater 360. Soft skills are context-dependent and rater-dependent. A manager sees upward behaviour; peers see horizontal behaviour; direct reports see downward behaviour. You need all three perspectives to get an accurate picture. A single-rater assessment of a behavioural competency is unreliable by design.
  2. Technical competencies: manager + self + evidence. For most technical skills, the manager and the individual are the most credible raters. Peers may not have the domain knowledge to evaluate. Where possible, supplement the rating with evidence: code reviews, audit results, certifications, project deliverables. The Dreyfus scale still applies, but the rater group is narrower.
  3. Mixed competencies: decompose first. "Project management" is both behavioural (stakeholder communication, conflict resolution) and technical (scheduling, risk modelling, scope definition). If you assess it as one competency, the score averages two signals. Decompose it into its behavioural and technical components so the gap analysis tells you which part needs development.

On Huneety, 360 assessments tag each competency as behavioural or technical at the framework level. The report separates the two types in the gap analysis so development priorities are not muddled by averaging a soft-skill gap with a hard-skill gap.
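The type-separated gap ranking can be sketched in a few lines. This assumes a 0-to-5 proficiency scale as described in the FAQ below; the data shape and competency ratings are illustrative, not Huneety's report format.

```python
# Sketch: rank behavioural and technical gaps independently, so a
# soft-skill gap is never averaged with a hard-skill gap.
# Scale assumed 0-5 (Dreyfus-style); data is illustrative.
assessed = {
    # name: (type, target level, rated level)
    "stakeholder communication": ("behavioural", 4, 2),
    "conflict resolution":       ("behavioural", 3, 3),
    "scheduling":                ("technical",   3, 2),
    "risk modelling":            ("technical",   4, 1),
}

def ranked_gaps(assessed, kind):
    """Gaps of one type only, largest first; no gap means excluded."""
    gaps = [
        (name, target - rated)
        for name, (k, target, rated) in assessed.items()
        if k == kind and target > rated
    ]
    return sorted(gaps, key=lambda g: g[1], reverse=True)

print(ranked_gaps(assessed, "behavioural"))
# [('stakeholder communication', 2)]
print(ranked_gaps(assessed, "technical"))
# [('risk modelling', 3), ('scheduling', 1)]
```

Because the two lists never mix, the largest technical gap (risk modelling, 3 levels) does not crowd out or dilute the behavioural priority.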


Different gaps, different development paths

The 70/20/10 development model applies to both types, but the weight shifts.

  1. Closing a behavioural gap. The 70% (on-the-job) carries most of the weight. A leadership communication gap does not close in a workshop. It closes when the person leads a difficult cross-functional meeting (70%), gets coached by someone who does it well (20%), and reads a framework for structuring executive-level updates (10%). Timeline: 2 to 4 quarters.
  2. Closing a technical gap. The 10% (formal training) carries more weight than usual. A Python gap closes faster with a structured course (10%) plus a real project that requires the new skill (70%) plus code review from an expert (20%). Timeline: 1 to 2 quarters. Technical gaps are generally faster to close because the feedback loop is tighter: the code works or it does not.
  3. The IDP separates the two. A single IDP that mixes behavioural and technical actions without distinguishing them produces a confusing plan. The manager cannot prioritise because the two types develop on different timelines with different interventions. Structure the IDP so each priority gap is clearly tagged as behavioural or technical, with actions matched to the type.

The IDP guide covers how to structure 70/20/10 actions for different gap types in detail.
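The shape of such a type-tagged IDP entry can be sketched as a small record. Every field name, action, and timeline below is an illustrative assumption drawn from the two examples above, not a prescribed schema.

```python
# Sketch: IDP entries tagged by competency type, each with 70/20/10
# actions and a timeline matched to the type. All values illustrative.
from dataclasses import dataclass

@dataclass
class IDPEntry:
    competency: str
    kind: str            # "behavioural" or "technical"
    on_the_job: str      # the 70%
    coaching: str        # the 20%
    formal: str          # the 10%
    timeline_quarters: range

entries = [
    IDPEntry(
        competency="leadership communication",
        kind="behavioural",
        on_the_job="lead a difficult cross-functional meeting",
        coaching="debrief with someone who does it well",
        formal="framework for structuring executive-level updates",
        timeline_quarters=range(2, 5),   # 2 to 4 quarters
    ),
    IDPEntry(
        competency="Python",
        kind="technical",
        on_the_job="real project that requires the new skill",
        coaching="code review from an expert",
        formal="structured course",
        timeline_quarters=range(1, 3),   # 1 to 2 quarters
    ),
]

# Group by type so the manager prioritises within comparable timelines.
by_type = {}
for entry in entries:
    by_type.setdefault(entry.kind, []).append(entry)
print(sorted(by_type))  # ['behavioural', 'technical']
```

The explicit `kind` tag is what makes the plan readable: each priority list contains only actions that develop on the same timeline with the same kind of intervention.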

Soft and hard skills on Huneety

Every competency in a Huneety framework is tagged as behavioural or technical at the point of creation. The competency mapping module enforces this tagging so the downstream assessment, gap analysis, and IDP all respect the distinction. Reports show separate spider charts for the two types, and the gap analysis ranks behavioural and technical priorities independently. If you import an existing framework that does not have the tagging, Huna AI can classify the competencies automatically and you review the result.

FAQ

Quick answers

What is the right ratio of soft skills to hard skills?
60% behavioural to 40% technical is the default for most mid-career roles. Technical-heavy roles (engineering, clinical) shift to 50/50 or 40/60. Management roles shift to 70/30 or higher. The ratio should be set per role, not per company.
Can you assess soft skills with a self-evaluation only?
Self-evaluations of behavioural competencies overstate by roughly half a proficiency level on average. Soft skills require multi-rater data (manager, peers, direct reports) to produce reliable scores because the behaviour is observed differently from each vantage point.
Do soft skills take longer to develop than hard skills?
Generally yes. Behavioural gaps typically take 2 to 4 quarters to close because the development is experiential: the person needs repeated exposure to new situations. Technical gaps often close in 1 to 2 quarters because formal training is more effective and the feedback loop (code works or it does not) is tighter.
Should I use the same proficiency scale for both types?
Yes. The Dreyfus 0 to 5 scale applies to both. What changes is the behavioural anchor, not the scale. A Level 3 in "stakeholder communication" describes a different behaviour than a Level 3 in "Python," but both mean the person is competent and independent in that domain.
