On this page
- What is a 360-degree assessment?
- Why a 360 beats single-source feedback
- How to run a 360 assessment in 6 steps
- Inside a 360 assessment report
- Variants of the 360 assessment
- What a 360 produces by role
- Five mistakes that kill 360 programs
- How Huneety runs 360 assessments
- Teams that rely on 360 assessments
- Related guides
- 360 assessment subtopics
- Common questions about 360 assessments
THE BASICS
What is a 360-degree assessment?
A 360-degree assessment is a structured multi-rater evaluation: the person being assessed rates themselves on the same competencies that up to four other rater groups rate them on. The raters are the assessee's direct manager, peers, direct reports, and optionally external stakeholders such as customers or cross-functional partners. The output is a single report that triangulates self-perception against everyone else's observations, flags blind spots, and measures each competency against the target level the role requires.
A 360 is a development tool, not a performance review. It is used to surface gaps a manager or an annual review would miss, to set development priorities for the next 6 to 12 months, and to calibrate readiness for promotion. When a 360 feeds a performance review or compensation decisions, the data gets contaminated: raters sanitise their answers, self-ratings inflate, and the program loses the trust it needs to work the following year.
- 1. Five rater groups. Self, manager, peers, direct reports, external. Not all five are used every time. A 180 is self plus manager. A manager-only assessment skips peers. The right mix depends on the role and the development goal.
- 2. Same competencies across all raters. Every rater rates the same 6 to 12 competencies on the same scale. Different questions per rater group defeat the point of triangulation.
- 3. Measured against a target. Every competency has a target level from the role profile. The report shows current versus target, not just an absolute score. A 3 out of 5 is good news for a level-3 role and a problem for a level-4 role.
- 4. Anonymised by rater group, not by rater. The report shows averages per group ('Your manager rated you 3.4'), not per person. This is what makes raters honest. A minimum of 3 raters per group is required before a group average is shown; below that, the group merges with peers.
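The anonymity rule above is mechanical enough to sketch in code. Below is a minimal Python illustration, assuming ratings arrive as flat (rater_group, score) pairs for one competency; the function and group names are hypothetical, not Huneety's actual API. Self and manager ratings are shown even with a single rater (the report says 'Your manager rated you 3.4'), so only peer-style groups are subject to the minimum-of-3 merge.

```python
from statistics import mean

MIN_RATERS = 3  # below this, a group's ratings merge into the peer group


def group_averages(ratings):
    """ratings: list of (rater_group, score) pairs for one competency.

    Returns one average per rater group, merging any anonymous group
    with fewer than MIN_RATERS raters into 'peers' so that no single
    rater's answer can be identified. 'self' and 'manager' are exempt:
    those groups are attributable by design.
    """
    by_group = {}
    for group, score in ratings:
        by_group.setdefault(group, []).append(score)

    merged = {}
    for group, scores in by_group.items():
        keep = group in ("self", "manager") or len(scores) >= MIN_RATERS
        target = group if keep else "peers"
        merged.setdefault(target, []).extend(scores)

    return {g: round(mean(s), 1) for g, s in merged.items()}


ratings = [
    ("manager", 3.4),
    ("peers", 3.0), ("peers", 4.0), ("peers", 3.5),
    ("reports", 2.0), ("reports", 3.0),  # only 2 raters -> folded into peers
]
print(group_averages(ratings))  # -> {'manager': 3.4, 'peers': 3.1}
```

Note how the two direct reports disappear as a separate group: their scores still count, but only inside the larger peer average.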
Example: HR Manager competency profile. Purple = role target. Green = current assessment. The gap between the two is where the IDP focuses.
WHY IT MATTERS
Why a 360 beats single-source feedback
Single-source feedback (a manager's annual review, a peer survey, a self-rating) is incomplete by construction. A manager sees the employee in upward-facing meetings and strategic conversations. Peers see the employee in cross-team collaboration. Direct reports see the employee as a boss. Each perspective misses what the others notice. A 360 triangulates them.
More importantly, a 360 produces the data that the individual development plan actually needs: a measured gap between current capability and role target, priority-ranked, with behaviour-level evidence from multiple observers. Without a 360, an IDP starts from manager opinion and stalls there. With a 360, the IDP starts from data the employee cannot argue with and a manager cannot rewrite to suit their preferred narrative.
- 1. Blind spot detection. Self-ratings inflate by roughly half a level on average. The delta between self and others is where real development conversations start. The report highlights over-estimation and under-estimation explicitly.
- 2. Defensible development priorities. When five groups converge on the same gap, the priority is non-negotiable. When they diverge, the conversation has to happen. Either way, the data is doing work.
- 3. Leadership pipeline calibration. Used across a management layer, 360 data surfaces which competencies the organisation is weakest on, not just which individuals need development. It becomes an L&D and promotion input.
- 4. Honest feedback at scale. Peer and direct-report feedback is notoriously hard to collect in 1:1s. The structured, anonymised 360 format gets it out, once a year, from the people who see the most.
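Blind-spot detection, the first point above, reduces to a simple delta rule: compare the self-rating against the average of everyone else and flag anything past a threshold. A minimal sketch in Python, assuming a 5-point scale and the roughly-half-a-level threshold mentioned above; the names and the exact cutoff are illustrative, not the platform's specification.

```python
def blind_spots(self_scores, others_scores, threshold=0.5):
    """Flag competencies where self-perception diverges from observers.

    self_scores / others_scores: dicts mapping competency -> score.
    threshold: assumed half-level delta that counts as a blind spot.
    """
    flags = {}
    for comp, self_score in self_scores.items():
        delta = self_score - others_scores[comp]
        if delta >= threshold:
            flags[comp] = "over-estimation"   # self rates higher than others observe
        elif delta <= -threshold:
            flags[comp] = "under-estimation"  # a hidden strength
    return flags


self_scores = {"Communication": 4.0, "Data Analysis": 3.0, "Decision Making": 3.5}
others      = {"Communication": 3.0, "Data Analysis": 3.8, "Decision Making": 3.4}
print(blind_spots(self_scores, others))
# -> {'Communication': 'over-estimation', 'Data Analysis': 'under-estimation'}
```

Decision Making, with a delta of only 0.1, is not flagged: self and observers broadly agree, so the debrief can spend its time on the two real perception gaps.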
THE PROCESS
How to run a 360 assessment in 6 steps
A 360 that delivers value takes 3 weeks from kickoff to manager-validated reports, for a cohort of 5 to 50 assessees. The bottleneck is not software, it is process: framework readiness, rater selection, and the debrief cadence. Skip any step and the program stalls.
Lock the competency framework and targets
Pick 6 to 12 competencies per role with behavioural anchors at 5 levels. Assign target levels to every competency per role. If the framework is not ready, the 360 measures against a moving yardstick and the data becomes unusable.
Pick the right assessees and raters
Assessees: high-potentials, new managers, leaders heading into a change, or everyone in a role family. Raters: 1 manager, 3 to 6 peers, 3 to 6 direct reports for managers, and optionally 2 to 3 external. Fewer than 3 per group merges into peers for anonymity.
Launch with clear purpose and anonymity
Every invitation must state the purpose (development, not performance), the anonymity model (group averages, not per-person), and the deadline. Ambiguity here is the single most common reason response rates fall below 60 percent.
Collect feedback in 10 to 30 minutes per rater
Rater fatigue is the killer. A 360 that takes 45 minutes gets half-finished. Keep the survey short, make every item competency-linked, and use mandatory written justification sparingly.
Generate the report and debrief with the manager first
The assessee should never receive the report cold. The manager reads it first, prepares the debrief conversation (strengths, gaps, next steps), then walks the assessee through it. 45 to 60 minutes per debrief. This is where the program earns its value.
Turn the report into an IDP within 2 weeks
The 360 report without an IDP is a file cabinet. Commit to a 70/20/10 development plan on the top 2 gaps within 14 days of the debrief. Quarterly review tracks progress. A second 360 re-assessment at 12 to 18 months closes the loop.
WHAT THE REPORT LOOKS LIKE
Inside a 360 assessment report
The 360 report is where the data becomes a conversation. A Huneety report is 13 pages of structured analysis, dynamically adapted to the assessment type (self-only, 180, full 360). The radar chart on the competency profile page is the single most-referenced visual in the debrief: it shows the current competency profile as one polygon, the role target as a second, and the career target as a third. Gaps become visible at a glance.
Beyond the radar, the report includes a SWOT grid (top 3 strengths, top 3 improvements, blind spots from self-over and self-under estimation), a perception-gap matrix showing score deltas by rater group, a per-competency breakdown with qualitative feedback grouped by rater group, and an AI-generated executive summary drafted by Huna from the data. No vendor language, no benchmark against other companies, just the gap against the role the assessee holds and the role they are growing into.
APPROACHES
Variants of the 360 assessment
Not every development question needs a full 360. Four variants cover most scenarios, and Huneety runs all four on the same framework and report template. Pick the variant that matches the objective and the role.
Full 360: five rater groups
Self + manager + peers + direct reports + external. The most complete picture, used for leadership development, succession readiness, and cross-functional role transitions. Requires at least 3 peers and 3 direct reports (for managers) to preserve group anonymity.
Time: 10 to 30 minutes per rater. Total elapsed: 1 to 2 weeks from launch to last submission. Best for senior individual contributors and management-layer development programs.
EXAMPLES
What a 360 produces by role
The output of a 360 is an IDP. Here is what the plan looks like for four roles after the assessment closes. Same framework, same 70/20/10 structure. Different role profiles, different gaps, different priorities.
Individual Development Plan · Priya Mehta · Marketing Manager
- Lead the Q3 brand reposition launch as primary stakeholder owner
- Present quarterly marketing results to the C-suite (own the deck)
- Bi-weekly 1:1 coaching with the VP Marketing
- Communicating with Executives (1-day workshop)
WHAT TO AVOID
Five mistakes that kill 360 programs
Year-one failure modes repeat across every organisation we work with. Screen for these before launch and the program stabilises by year two.
Using 360 data in performance review
The moment raters suspect their answers feed into compensation, responses sanitise. Keep the 360 strictly developmental. If managers need performance data, run a separate process on a separate cycle with different vocabulary.
Rater fatigue from a 45-minute survey
Every added question is a response-rate tax. Keep to 6 to 12 competencies, 10 to 30 minutes total. Open-text comments on 2 or 3 highest-leverage items, not every competency.
Weak rater selection
Raters the assessee hand-picks with no manager review produce friendly-fire data. Raters the HR team assigns in a vacuum produce low-context data. The best process: the assessee proposes, the manager approves, HR checks for balance.
No manager debrief
Handing the assessee a PDF cold is the single fastest way to kill trust in the program. The manager reads the report first, prepares, and debriefs in 45 to 60 minutes. If managers cannot or will not, the program is not ready.
No follow-up IDP
A 360 report without a 70/20/10 plan within 14 days becomes a file nobody opens. The development plan is where the program pays back. If the organisation is not ready to run IDPs, delay the 360 until it is.
HUNEETY PLATFORM
How Huneety runs 360 assessments
Most 360 programs stall on logistics: framework setup, rater invitations, reminder chasing, report generation, debrief scheduling. Huneety automates the logistics so HR spends its time on the conversation that produces development, not on the spreadsheet that runs the campaign.
360 Assessment Results · Marketing Manager

| Competency | Current | Target | Gap | Priority |
| --- | --- | --- | --- | --- |
| Strategic Thinking | L2 | L4 | −2.0 | P1 |
| Communication | L3 | L4 | −1.0 | P2 |
| Data Analysis | L4 | L4 | ✓ | — |
| Decision Making | L2 | L3 | −1.0 | P3 |

Top 3 gaps identified · IDP auto-generated
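The gap and priority columns in that results view follow a simple rule: gap = current level minus target level, and priority ranks P1, P2, P3 go to the largest shortfalls. A minimal Python sketch of that logic, using the Marketing Manager numbers above; this is an illustration of the rule, not the platform's actual implementation.

```python
def rank_gaps(profile):
    """profile: dict mapping competency -> (current_level, target_level).

    Gap is current minus target; negative means below target. Priority
    labels P1, P2, ... go to the largest shortfalls (ties keep profile
    order). Competencies at or above target get a '-' placeholder.
    """
    gaps = {c: cur - tgt for c, (cur, tgt) in profile.items()}
    shortfalls = sorted((c for c, g in gaps.items() if g < 0),
                        key=lambda c: gaps[c])  # most negative first
    priorities = {c: f"P{i}" for i, c in enumerate(shortfalls, start=1)}
    return [(c, gaps[c], priorities.get(c, "-")) for c in profile]


profile = {
    "Strategic Thinking": (2, 4),
    "Communication": (3, 4),
    "Data Analysis": (4, 4),
    "Decision Making": (2, 3),
}
for competency, gap, priority in rank_gaps(profile):
    print(f"{competency:<20} gap {gap:>3}  {priority}")
```

The top two or three entries of this ranking are exactly what the 70/20/10 IDP draft is built around.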
Your competencies, your scales
Run on your framework, not a vendor taxonomy. Dreyfus 1 to 5, Culture Fit (Resistor to Role Model), Performance (Needs Improvement to Outstanding), or custom labels per workspace. Same platform, right tool for the objective.
Token-based rater access, no logins
Raters click a secure link, rate in 10 to 30 minutes, submit. No password, no account, no IT ticket. Token expires after 7 days. Response rates climb to the 70 to 85 percent range.
AI-generated report with IDP draft
13-page branded PDF per assessee: radar, gap vs role, perception matrix, SWOT with blind spots, executive summary drafted by Huna, and a 70/20/10 IDP proposal for the manager to edit. Generated in minutes, not weeks.
Group campaigns, one launch
Launch a 360 for a cohort of 5 to 500 in a single campaign. Shared role profile or per-talent profiles, HR sees one dashboard for the whole group, each assessee gets their own report and IDP.
WHO USES IT
Teams that rely on 360 assessments
360 assessments are used most heavily by two audiences with different cadences: in-house HR teams running annual development programs, and leadership or HR consultants running shorter fixed-scope engagements.
In-house HR teams
HR teams embed 360 into the annual talent cycle: one cycle for high-potentials, one for the management layer, one for leaders heading into a new role. The framework is the standing artefact, the cycles are the operating rhythm, and the data feeds performance reviews, IDPs, succession conversations, and promotion calibration.
HR and leadership consultants
Consultants run 360 as a fixed-scope engagement: 2 to 3 weeks, a defined cohort, branded reports delivered at the end. With a white-label platform, the methodology is the consultant's, the delivery is the consultant's, and the client workspace can hand over at project close.
GO DEEPER
360 assessment subtopics
Four deeper reads on specific angles of running a 360. Start with the report guide if you have never seen a 360 output, or the rater-group subtopics if you are designing the campaign.
360 self-evaluation guide
How self-ratings behave in a 360, why they overstate by half a level, and how to run a self-eval that produces useful data.
Peer feedback in 360 assessments
Why peer data is the highest-signal rater group, how to pick peer raters, and how calibration works across the peer set.
360 assessment report guide
What a Huneety 360 report contains page by page, how to read the radar, the SWOT, the blind spots, and the perception-gap matrix.
Culture fit assessment guide
How to run a culture-fit 360 using observable behaviours instead of personality inventories, and when it actually makes sense.
FREQUENTLY ASKED
Common questions about 360 assessments
The questions that come up most in the first HR-team kickoff.
Related terms
The 360 Assessment glossary lands with our upcoming term library. Until then, the full vocabulary lives inside the platform itself.