
360 Assessment

What is a 360 degree assessment?

Huneety Team
April 17, 2026 · 5 min read
[Figure: five rater groups orbiting a center person in a 360 assessment diagram]

Most organizations assess their people from one angle: the manager’s. That single viewpoint misses how someone communicates with peers, leads direct reports, or handles client relationships. The question “what is a 360-degree assessment?” comes up when HR teams realize one perspective is not enough to make fair development decisions.

This guide covers the methodology behind 360 assessments, the five rater groups, when to use each assessment variant, and the critical difference between skills assessment and performance evaluation.

What is a 360-degree assessment? A 360-degree assessment collects competency ratings from five groups: the individual themselves, their manager, peers, direct reports, and external contacts. The multi-rater structure reduces single-source bias and gives HR teams a rounded view of each person’s strengths, blind spots, and development needs. Read the full 360-degree assessment guide.

Why five rater groups matter

A 360 assessment draws on five distinct perspectives. Each group sees different behaviors.

Self-ratings capture how someone perceives their own competency levels; on average they inflate by about half a Dreyfus point. That gap between self-perception and others’ perception is one of the most valuable data points in the entire report.

The manager rates role-critical competencies and strategic alignment. Manager scores carry weight for promotion and succession conversations because the manager sets expectations for the role.

Peers observe day-to-day collaboration, communication under pressure, and knowledge sharing. Peer ratings are often the highest-signal data in a 360 because peers see behaviors that managers and direct reports do not.

Direct reports rate leadership behaviors: delegation, coaching, feedback quality, and psychological safety. This perspective is invisible to the manager group and often surfaces blind spots in leadership style.

External raters (clients, vendors, cross-functional partners) contribute context that no internal group can provide. External ratings are optional but valuable for client-facing roles.

Who sees what

Manager view

  • Strategic alignment
  • Goal delivery
  • Role-critical competency

Peer + direct report view

  • Daily collaboration
  • Communication under pressure
  • Coaching and delegation style

Each rater completes the assessment in 10 to 20 minutes. Responses are anonymous: individual rater names are never shown, and group data only appears when at least three raters per group have responded.

Skills assessment vs performance assessment

This distinction matters more than most HR teams realize. Mixing the two corrupts both.

A skills assessment (competency assessment) measures current capability against a defined proficiency target. It uses a developmental scale like the Dreyfus model (0 to 5) and answers the question: “Where is this person relative to where they need to be?” Skills assessments have no impact on bonus, salary, or promotion decisions. They feed development plans.

A performance assessment measures output against goals. It answers: “Did this person deliver what was expected?” Performance assessments directly affect compensation, bonus, and career progression.

The rule is simple: keep skills assessment and performance assessment in separate processes. Skills data goes to development plans. Performance data goes to compensation reviews. When both live in the same instrument, neither produces accurate results.

Two parallel tracks

Skills track (development)

  1. Skills assessment

    Competency ratings (Dreyfus 0-5) collected from 5 rater groups

  2. Gap analysis

    Current score vs target proficiency per competency

  3. Development plan

    70/20/10 actions assigned to close each gap

  4. Re-assessment

    Next cycle measures progress

Performance track (compensation)

  1. Performance review

    Output vs goals, rated by manager

  2. Calibration

    Cross-team calibration meeting

  3. Compensation decision

    Bonus and salary adjustments

Huneety runs skills assessments on the Dreyfus scale. Performance ratings use a separate scale (Needs Improvement to Outstanding). The platform supports both, but they feed different workflows by design.

360 vs 180 vs self-only

Not every assessment needs five rater groups. The right variant depends on the purpose, the population, and the timeline.

Full 360 (all five groups) works best for leadership development, succession planning, and coaching programs where perception gaps matter. It takes 3 weeks from kickoff to reports and requires careful rater selection.

180-degree assessment (self + manager only, or self + peers only) suits situations where you need a faster cycle or where the assessee has no direct reports. Common for individual contributors and early-career professionals.

Self-only assessment is useful as a baseline before a more complete assessment, or for large-scale skills inventories where you need data fast. Self-only data has known inflation bias, so treat it as directional rather than definitive.

Complete guide

360-degree assessment: the complete guide

The complete guide covers the 6-step process, report structure, and the 5 mistakes that kill programs.

Read the guide →

What a 360 assessment measures

A 360 assessment measures competencies, not personality traits and not job performance. Each competency is defined as a set of observable behaviors rated on the Dreyfus scale from 0 (no experience) to 5 (expert).

Each role is assessed against a capped set of competencies: up to 8 for individual contributors, 10 for managers, and 12 for directors, with at least one third of them behavioral. The platform draws from a library of 1,700+ pre-built skills across 300+ competencies. Organizations can also import their own framework or have Huna AI generate one from job descriptions.

Each rater rates each competency. The platform aggregates scores by rater group, calculates the gap between current proficiency and target proficiency, and flags perception gaps where self-ratings diverge from others’ ratings.
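The aggregation described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name, data shapes, and the one-point perception-gap threshold are assumptions, not Huneety’s actual implementation.

```python
from statistics import mean

def aggregate_competency(ratings, target):
    """Aggregate one competency's scores by rater group.

    ratings: {rater_group: [Dreyfus scores 0-5]} -- shape is assumed.
    target: required proficiency for the role.
    """
    # Per-group averages, as shown in the report
    group_means = {g: mean(scores) for g, scores in ratings.items() if scores}

    # Current proficiency = average of everyone except the self-rating
    others = [s for g, scores in ratings.items() if g != "self" for s in scores]
    current = mean(others) if others else None

    return {
        "group_means": group_means,
        # Gap between target and current proficiency
        "gap": round(target - current, 2) if current is not None else None,
        # Perception gap: how far self-perception diverges from others
        "perception_gap": (
            round(group_means["self"] - current, 2)
            if current is not None and "self" in group_means
            else None
        ),
    }

result = aggregate_competency(
    {"self": [4], "manager": [3], "peers": [3, 2, 3], "reports": [2, 3, 3]},
    target=4,
)
```

With these sample scores, the self-rating sits well above the combined view of the other groups, so the report would flag both a development gap and a perception gap on this competency.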

The output is a 13-page structured report that includes a spider chart, gap analysis, SWOT quadrant, blind spot detection, and an AI-generated executive summary.

The anonymity rule

Rater anonymity protects data quality. If raters fear their individual scores will be traced back to them, they soften their ratings.

The standard anonymity rule: group-level data only appears when at least three raters in that group have responded. Below three, the data is suppressed entirely. Individual rater names are never shown in the report.
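That suppression rule is simple enough to express directly. A minimal sketch, assuming group ratings arrive as lists of scores; the names and the manager exception follow the rules described in this section, but the code itself is hypothetical:

```python
MIN_RATERS = 3  # standard rule from the text: suppress group data below 3 respondents

def visible_groups(responses_by_group):
    """Return only the rater groups whose data may appear in the report.

    responses_by_group: {group_name: [scores]} -- assumed shape.
    The manager group is shown even as a single rater, since a lone
    manager cannot be anonymized (assessees know this going in).
    """
    return {
        group: scores
        for group, scores in responses_by_group.items()
        if group == "manager" or len(scores) >= MIN_RATERS
    }

report = visible_groups({
    "manager": [3],
    "peers": [3, 4, 3, 2],     # 4 respondents -> shown
    "direct_reports": [4, 4],  # only 2 respondents -> suppressed entirely
})
```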

This matters in Southeast Asian contexts (Thailand, Indonesia, Malaysia) where cultural norms around hierarchy and face-saving can suppress honest feedback. Anonymity does not eliminate these dynamics, but it reduces them enough to produce usable data.

For the manager group (typically one person), the report shows the manager’s ratings openly since there is no way to anonymize a single rater. Assessees know this going in.

Built for HR teams

Run your next 360 assessment project

Multi-rater assessments with branded reports, automatic reminders, and development recommendations. Framework to reports in 3 weeks.

See how it works →

What happens after the assessment

The assessment itself is a data collection exercise. The value comes from what happens next.

Each assessee receives their report. HR or a coach debriefs the results in a 60 to 90 minute structured session. The debrief focuses on perception gaps (where self-ratings diverge from others), the SWOT quadrant (strengths, improvements, blind spots, hidden strengths), and the top 3 development priorities.

Those priorities feed directly into an individual development plan built on the 70/20/10 framework: 70% on-the-job stretch assignments, 20% learning through others (mentoring, shadowing), and 10% formal learning (courses, certifications).
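As a quick illustration of the arithmetic, here is how a development-time budget splits under 70/20/10. The 100-hour budget is a made-up example, not a Huneety default:

```python
def split_70_20_10(total_hours):
    """Split a development-time budget per the 70/20/10 framework."""
    return {
        "on_the_job": round(total_hours * 0.70),  # stretch assignments
        "social": round(total_hours * 0.20),      # mentoring, shadowing
        "formal": round(total_hours * 0.10),      # courses, certifications
    }

plan = split_70_20_10(100)  # e.g. 100 hours over the next cycle
```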

The next assessment cycle (typically 6 to 12 months later) measures whether the gaps closed. That before-and-after comparison is the most concrete evidence of development ROI an HR team can present.

Frequently asked questions

What is a 360 assessment used for?

A 360 assessment is used for development, not evaluation. It collects competency ratings from multiple rater groups to identify skills gaps and perception gaps. The data feeds individual development plans, coaching programs, and succession planning.

How many raters does a 360 assessment need?

A minimum of 3 raters per group is required before group-level data appears in the report. A typical setup includes 1 manager, 3 to 5 peers, and 3 to 5 direct reports, plus the self-assessment. That is 10 to 14 raters per assessee.

How long does a 360 assessment take?

Each rater spends 10 to 20 minutes completing the assessment. The full cycle from [project launch](/blog/launching-360-assessment-project) to report delivery takes approximately 3 weeks, including rater selection, data collection, and automated reminders at day 5 and day 10.

What is the difference between a 360 and a 180 assessment?

A 360 assessment collects ratings from all five groups (self, manager, peers, direct reports, external). A 180 assessment uses only two groups, typically self and manager. The 180 variant is faster but misses the peer and direct report perspectives that reveal blind spots.

Does a 360 assessment affect compensation?

It depends on the rating scale. When the 360 uses performance scales (e.g. "Needs Improvement" to "Outstanding"), it can supplement or replace a traditional performance review, and results may impact salary increases and bonuses. When the 360 uses competency scales (e.g. Dreyfus 0-5), it measures skill proficiency, not output. Competency and skills assessments have no impact on compensation. Their purpose is development: identifying gaps and creating IDPs to close them. Most organizations keep the two instruments separate to avoid raters adjusting scores when compensation is at stake.

Running your first 360 assessment project, or scaling an existing program? Huneety works with HR teams running project-based or annual assessments, and with HR consultants who deliver assessments under their own brand.


Ready to close the gaps?

Book a demo. We'll show you how it works with your competency framework.