360 ASSESSMENT

360-degree assessment: the complete guide

A 360 assessment is a multi-rater evaluation where an employee receives feedback from self, manager, peers, direct reports, and sometimes external stakeholders, all rating the same competencies. Done well, it surfaces gaps against role requirements and seeds a real development plan. Done badly, it becomes a survey nobody acts on. This guide covers the methodology, the five rater perspectives, the report structure, common failure modes, and how Huneety runs the cycle end-to-end.

By Simon Carvi · Published April 2026 · 14 min read


THE BASICS

What is a 360-degree assessment?

A 360-degree assessment is a structured multi-rater evaluation: the person being assessed rates themselves on the same competencies that four other rater groups will rate them on. The raters are the assessee's direct manager, peers, direct reports, and optionally external stakeholders like customers or cross-functional partners. The output is a single report that triangulates self-perception against everyone else's observation, flags blind spots, and measures each competency against the target level the role requires.

A 360 is a development tool, not a performance review. It is used to surface gaps a manager or an annual review would miss, to set development priorities for the next 6 to 12 months, and to calibrate readiness for promotion. When a 360 is tied to performance review or compensation, the data gets contaminated: raters sanitise their answers, self-ratings inflate, and the program loses the trust it needs to work the following year.

  1. Five rater groups

    Self, manager, peers, direct reports, external. Not all five are used every time. A 180 is self plus manager. A manager-only assessment skips peers. The right mix depends on the role and the development goal.
  2. Same competencies across all raters

    Every rater rates the same 6 to 12 competencies on the same scale. Different questions per rater group defeat the point of triangulation.
  3. Measured against a target

    Every competency has a target level from the role profile. The report shows current versus target, not just an absolute score. A 3 out of 5 is good news for a level-3 role and a problem for a level-4 role.
  4. Anonymised by rater group, not by rater

    The report shows averages per group ('Your manager rated you 3.4'), not per person. This is what makes raters honest. A minimum of 3 raters per group is required before the group average shows; below that, the group merges with peers.
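
The anonymity rule in point 4 is mechanical enough to sketch. Here is a minimal illustration in Python, assuming one competency's raw ratings keyed by rater group; the function and threshold names are hypothetical, not Huneety's implementation:

```python
from statistics import mean

MIN_GROUP_SIZE = 3  # below this, a group's ratings merge into "peers"

def group_averages(ratings_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Aggregate one competency's ratings per rater group, merging
    under-sized groups into 'peers' to preserve anonymity."""
    merged: dict[str, list[float]] = {"peers": list(ratings_by_group.get("peers", []))}
    for group, scores in ratings_by_group.items():
        if group == "peers":
            continue
        # 'self' and 'manager' are single-rater by design and shown as-is
        if group in ("self", "manager") or len(scores) >= MIN_GROUP_SIZE:
            merged[group] = list(scores)
        else:
            merged["peers"].extend(scores)  # too few raters to stay anonymous
    return {g: round(mean(s), 1) for g, s in merged.items() if s}
```

For example, two direct-report ratings fall below the threshold, so they fold into the peer average and never appear as their own line in the report.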

Example: HR Manager competency profile. Purple = role target. Green = current assessment. The gap between the two is where the IDP focuses.

WHY IT MATTERS

Why a 360 beats single-source feedback

Single-source feedback (a manager's annual review, a peer survey, a self-rating) is incomplete by construction. A manager sees the employee in upward-facing meetings and strategic conversations. Peers see the employee in cross-team collaboration. Direct reports see the employee as a boss. Each perspective misses what the others notice. A 360 triangulates them.

More importantly, a 360 produces the data that the individual development plan actually needs: a measured gap between current capability and role target, priority-ranked, with behaviour-level evidence from multiple observers. Without a 360, an IDP starts from manager opinion and stalls there. With a 360, the IDP starts from data the employee cannot argue with and a manager cannot rewrite to suit their preferred narrative.

  1. Blind spot detection

    Self-ratings inflate by roughly half a level on average. The delta between self and others is where real development conversations start. The report highlights over-estimation and under-estimation explicitly.
  2. Defensible development priorities

    When five groups converge on the same gap, the priority is non-negotiable. When they diverge, the conversation has to happen. Either way, the data is doing work.
  3. Leadership pipeline calibration

    Used across a management layer, 360 data surfaces which competencies the organisation is weakest on, not just which individuals need development. It becomes an L&D and promotion input.
  4. Honest feedback at scale

    Peer and direct-report feedback is notoriously hard to collect in 1:1s. The structured, anonymised 360 format gets it out, once a year, from the people who see the most.

THE PROCESS

How to run a 360 assessment in 6 steps

A 360 that delivers value takes 3 weeks from kickoff to manager-validated reports for a cohort of 5 to 50 assessees. The bottleneck is not software but process: framework readiness, rater selection, and the debrief cadence. Skip any step and the program stalls.

WHAT THE REPORT LOOKS LIKE

Inside a 360 assessment report

The 360 report is where the data becomes a conversation. A Huneety report is 13 pages of structured analysis, dynamically adapted to the assessment type (self-only, 180, full 360). The radar chart on the competency profile page is the single most-referenced visual in the debrief: it shows the current competency profile as one polygon, the role target as a second, and the career target as a third. Gaps become visible at a glance.

Beyond the radar, the report includes a SWOT grid (top 3 strengths, top 3 improvements, blind spots from self-over and self-under estimation), a perception-gap matrix showing score deltas by rater group, a per-competency breakdown with qualitative feedback grouped by rater group, and an AI-generated executive summary drafted by Huna from the data. No vendor language, no benchmark against other companies, just the gap against the role the assessee holds and the role they are growing into.


APPROACHES

Variants of the 360 assessment

Not every development question needs a full 360. Four variants cover most scenarios, and Huneety runs all four on the same framework and report template. Pick the variant that matches the objective and the role.

Full 360: five rater groups

Self + manager + peers + direct reports + external. The most complete picture, used for leadership development, succession readiness, and cross-functional role transitions. Requires at least 3 peers and 3 direct reports (for managers) to preserve group anonymity.

Time: 10 to 30 minutes per rater. Total elapsed: 1 to 2 weeks from launch to last submission. Best for senior individual contributors and management-layer development programs.

EXAMPLES

What a 360 produces by role

The output of a 360 is an IDP. Here is what the plan looks like for four roles after the assessment closes. Same framework, same 70/20/10 structure. Different role profiles, different gaps, different priorities.

Individual Development Plan

Priya Mehta · Marketing Manager · In Progress

On the job (70%)

  • Lead the Q3 brand reposition launch as primary stakeholder owner · due Sep 30
  • Present quarterly marketing results to the C-suite (own the deck) · due Oct 15

Through others (20%)

  • Bi-weekly 1:1 coaching with the VP Marketing · ongoing · Helen R.

Training (10%)

  • Communicating with Executives (1-day workshop) · due Aug 20
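
The 70/20/10 plan above is simple to represent as data. A minimal sketch, assuming a bare `Action` record; the names and bucket keys are illustrative, not Huneety's API:

```python
from dataclasses import dataclass

# 70/20/10 buckets: on-the-job work, learning through others, formal training.
BUCKETS = {"on_the_job": 70, "through_others": 20, "training": 10}

@dataclass
class Action:
    description: str
    bucket: str   # one of BUCKETS
    due: str      # free-form deadline, e.g. "Sep 30" or "ongoing"

def plan_summary(actions: list[Action]) -> dict[str, int]:
    """Count actions per bucket; reject actions outside the 70/20/10 model."""
    counts = {b: 0 for b in BUCKETS}
    for a in actions:
        if a.bucket not in BUCKETS:
            raise ValueError(f"unknown bucket: {a.bucket}")
        counts[a.bucket] += 1
    return counts

# Priya's plan from the example above, as data
plan = [
    Action("Lead the Q3 brand reposition launch", "on_the_job", "Sep 30"),
    Action("Present quarterly marketing results to the C-suite", "on_the_job", "Oct 15"),
    Action("Bi-weekly 1:1 coaching with the VP Marketing", "through_others", "ongoing"),
    Action("Communicating with Executives workshop", "training", "Aug 20"),
]
```

The weights in `BUCKETS` describe intended effort, not action counts; a plan with two on-the-job items, one coaching item, and one training item matches the model's shape.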

WHAT TO AVOID

Five mistakes that kill 360 programs

Year-one failure modes repeat across every organisation we work with. Screen for these before launch and the program stabilises by year two.

  1. Using 360 data in performance review

    The moment raters suspect their answers feed into compensation, responses sanitise. Keep 360 strictly developmental. If managers need performance data, run a separate process on a separate cycle with different vocabulary.
  2. Rater fatigue from a 45-minute survey

    Every added question is a response rate tax. Keep to 6 to 12 competencies, 10 to 30 minutes total. Open-text comments on 2 or 3 highest-leverage items, not every competency.
  3. Weak rater selection

    Raters the assessee hand-picks with no manager review produce friendly-fire data. Raters the HR team assigns in a vacuum produce low-context data. The best process: assessee proposes, manager approves, HR checks for balance.
  4. No manager debrief

    Handing the assessee a PDF cold is the single fastest way to kill trust in the program. The manager reads the report first, prepares, and debriefs in 45 to 60 minutes. If managers cannot or will not, the program is not ready.
  5. No follow-up IDP

    A 360 report without a 70/20/10 plan within 14 days becomes a file nobody opens. The development plan is where the program pays back. If the organisation is not ready to run IDPs, delay the 360 until it is.

HUNEETY PLATFORM

How Huneety runs 360 assessments

Most 360 programs stall on logistics: framework setup, rater invitations, reminder chasing, report generation, debrief scheduling. Huneety automates the logistics so HR spends its time on the conversation that produces development, not on the spreadsheet that runs the campaign.

360 Assessment Results

Marketing Manager

  • Strategic Thinking: current L2, target L4, gap −2.0, priority P1
  • Communication: current L3, target L4, gap −1.0, priority P2
  • Data Analysis: current L4, target L4, on target
  • Decision Making: current L2, target L3, gap −1.0, priority P3

Top 3 gaps identified · IDP auto-generated
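
The gap and priority columns above follow directly from current level minus target level. A minimal sketch of that ranking logic, with illustrative names rather than the platform's code:

```python
def rank_gaps(profile: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Given {competency: (current_level, target_level)}, return the
    competencies below target, largest gap first (these become P1, P2, ...)."""
    gaps = [(name, current - target)          # negative = below target
            for name, (current, target) in profile.items()
            if current < target]
    return sorted(gaps, key=lambda g: g[1])   # most negative first

# The Marketing Manager example above
profile = {
    "Strategic Thinking": (2, 4),
    "Communication":      (3, 4),
    "Data Analysis":      (4, 4),   # on target: excluded from priorities
    "Decision Making":    (2, 3),
}
```

Competencies already at target drop out of the priority list, which is why Data Analysis carries no priority in the example.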

  • Your competencies, your scales

    Run on your framework, not a vendor taxonomy. Dreyfus 1 to 5, Culture Fit (Resistor to Role Model), Performance (Needs Improvement to Outstanding), or custom labels per workspace. Same platform, right tool for the objective.

  • Token-based rater access, no logins

    Raters click a secure link, rate in 10 to 30 minutes, submit. No password, no account, no IT ticket. Token expires after 7 days. Response rates climb to the 70 to 85 percent range.

  • AI-generated report with IDP draft

    13-page branded PDF per assessee: radar, gap vs role, perception matrix, SWOT with blind spots, executive summary drafted by Huna, and a 70/20/10 IDP proposal for the manager to edit. Generated in minutes, not weeks.

  • Group campaigns, one launch

    Launch a 360 for a cohort of 5 to 500 in a single campaign. Shared role profile or per-talent profiles, HR sees one dashboard for the whole group, each assessee gets their own report and IDP.

WHO USES IT

Teams that rely on 360 assessments

360 assessments are used most heavily by two audiences with different cadences: in-house HR teams running annual development programs, and leadership or HR consultants running shorter fixed-scope engagements.

In-house HR teams

HR teams embed 360 into the annual talent cycle: one cycle for high-potentials, one for the management layer, one for leaders heading into a new role. The framework is the standing artefact, the cycles are the operating rhythm, and the data feeds performance reviews, IDPs, succession conversations, and promotion calibration.


HR and leadership consultants

Consultants run 360 as a fixed-scope engagement: 2 to 3 weeks, a defined cohort, branded reports delivered at the end. With a white-label platform, the methodology is the consultant's, the delivery is the consultant's, and the client workspace can hand over at project close.


GO DEEPER

360 assessment subtopics

Four deeper reads on specific angles of running a 360. Start with the report guide if you have never seen a 360 output, or the rater-group subtopics if you are designing the campaign.

FREQUENTLY ASKED

Common questions about 360 assessments

The questions that come up most in the first HR-team kickoff.

How many raters does a 360 need?

The minimum for group anonymity is 3 per group. A workable full 360 is 1 self, 1 manager, 3 to 6 peers, 3 to 6 direct reports if the assessee is a manager, and 2 to 3 external if relevant. Below the 3-per-group minimum, Huneety merges the group with peers so no individual rater can be identified.

Can 360 results feed performance reviews or compensation?

No. The moment raters suspect their answers feed compensation or promotion, the responses sanitise and the data becomes useless for development. Keep 360 strictly developmental. If performance data is needed, run a separate process on a separate cycle with different vocabulary.

How long does a 360 cycle take?

Three weeks from kickoff to manager-validated reports for a cohort of 5 to 50 assessees. Framework setup (or import) plus campaign configuration takes a week. Rater submission takes 1 to 2 weeks. Report generation is automatic once submissions close. Debriefs happen the following week.

How many competencies should a 360 cover?

6 to 12 per role, tied to the role profile, written as observable behaviours with 5 proficiency levels. The same set every rater sees. Different rater groups with different question sets defeat the triangulation that makes 360 valuable.

What is the difference between a 180 and a 360?

A 180 is self plus one other rater group (typically manager). A 360 is self plus manager plus peers plus direct reports plus optional external. The 180 is cheaper to run and lower-signal, useful as a first cycle or a mid-year check between full 360s.

How is rater anonymity protected?

Ratings aggregate by rater group. Group averages show only when there are at least 3 raters in the group. Below 3, the group merges with peers. Written comments are paraphrased before they reach the report. Token-based access means no rater identity is stored beyond the invitation record.

Can one campaign cover multiple assessees?

Yes, and this is the common case for HR teams. One campaign can invite multiple assessees, each with their own raters. Assessees can share a role profile for cohort-level comparison or use their own for organisation-wide programs. HR sees one progress dashboard, each assessee gets their own report and IDP.

Does Huneety benchmark against other companies?

No. External benchmarks compare roles that are not actually the same across companies: different scope, different vocabulary, different expectations. Huneety benchmarks every assessee against your role targets. That is the standard that matters for development decisions, and the one HR and managers can act on.

Related terms

The 360 Assessment glossary lands with our upcoming term library. Until then, the full vocabulary lives inside the platform itself.

Run your first 360 cycle on Huneety in 3 weeks

Framework setup, campaign launch, rater submissions, AI-generated reports, and manager-validated IDPs in one workspace. We handle the logistics so HR can focus on the debrief conversation.