Most organizations assess their people from one angle: the manager’s. That single viewpoint misses how someone communicates with peers, leads direct reports, or handles client relationships. The question “what is a 360-degree assessment?” comes up when HR teams realize one perspective is not enough to make fair development decisions.
This guide covers the methodology behind 360 assessments, the five rater groups, when to use each assessment variant, and the critical difference between a skills assessment and a performance assessment.
What is a 360-degree assessment? A 360-degree assessment collects competency ratings from five groups: the individual themselves, their manager, peers, direct reports, and external contacts. The multi-rater structure reduces single-source bias and gives HR teams a rounded view of each person’s strengths, blind spots, and development needs. Read the full 360-degree assessment guide.
Why five rater groups matter
A 360 assessment draws on five distinct perspectives. Each group sees different behaviors.
Self-ratings capture how someone perceives their own competency levels. Self-ratings tend to inflate by about half a Dreyfus point on average. That gap between self-perception and others’ perception is one of the most valuable data points in the entire report.
The manager rates role-critical competencies and strategic alignment. Manager scores carry weight for promotion and succession conversations because the manager sets expectations for the role.
Peers observe day-to-day collaboration, communication under pressure, and knowledge sharing. Peer data is often the highest-signal group in a 360 because peers see behaviors that managers and direct reports do not.
Direct reports rate leadership behaviors: delegation, coaching, feedback quality, and psychological safety. This perspective is invisible to the manager group and often surfaces blind spots in leadership style.
External raters (clients, vendors, cross-functional partners) contribute context that no internal group can provide. External ratings are optional but valuable for client-facing roles.
Manager view
- Strategic alignment
- Goal delivery
- Role-critical competency
Peer + direct report view
- Daily collaboration
- Communication under pressure
- Coaching and delegation style
Each rater completes the assessment in 10 to 20 minutes. Responses are anonymous: individual rater names are never shown, and group data only appears when at least three raters per group have responded.
Skills assessment vs performance assessment
This distinction matters more than most HR teams realize. Mixing the two corrupts both.
A skills assessment (competency assessment) measures current capability against a defined proficiency target. It uses a developmental scale like the Dreyfus model (0 to 5) and answers the question: “Where is this person relative to where they need to be?” Skills assessments have no impact on bonus, salary, or promotion decisions. They feed development plans.
A performance assessment measures output against goals. It answers: “Did this person deliver what was expected?” Performance assessments directly affect compensation, bonus, and career progression.
The rule is simple: keep skills assessment and performance assessment in separate processes. Skills data goes to development plans. Performance data goes to compensation reviews. When both live in the same instrument, neither produces accurate results.
Skills assessment
- Competency ratings: Dreyfus 0-5 scores collected from 5 rater groups
- Gap analysis: current score vs target proficiency per competency
- Development plan: 70/20/10 actions assigned to close each gap
- Re-assessment: next cycle measures progress

Performance assessment
- Performance review: output vs goals, rated by manager
- Calibration: cross-team calibration meeting
- Compensation decision: bonus and salary adjustments
Huneety runs skills assessments on the Dreyfus scale. Performance ratings use a separate scale (Needs Improvement to Outstanding). The platform supports both, but they feed different workflows by design.
360 vs 180 vs self-only
Not every assessment needs five rater groups. The right variant depends on the purpose, the population, and the timeline.
Full 360 (all five groups) works best for leadership development, succession planning, and coaching programs where perception gaps matter. It takes 3 weeks from kickoff to reports and requires careful rater selection.
180-degree assessment (self + manager only, or self + peers only) suits situations where you need a faster cycle or where the assessee has no direct reports. Common for individual contributors and early-career professionals.
Self-only assessment is useful as a baseline before a more complete assessment, or for large-scale skills inventories where you need data fast. Self-only data has known inflation bias, so treat it as directional rather than definitive.
Complete guide
360-degree assessment: the complete guide
The complete guide covers the 6-step process, report structure, and the 5 mistakes that kill programs.
Read the guide →
What a 360 assessment measures
A 360 assessment measures competencies, not personality traits and not job performance. Each competency is defined as a set of observable behaviors rated on the Dreyfus scale from 0 (no experience) to 5 (expert).
Each role is assessed against a capped set of competencies: up to 8 for individual contributors, 10 for managers, and 12 for directors, with at least one third behavioral. The platform draws from a library of 1,700+ pre-built skills across 300+ competencies. Organizations can also import their own framework or have Huna AI generate one from job descriptions.
Each rater rates each competency. The platform aggregates scores by rater group, calculates the gap between current proficiency and target proficiency, and flags perception gaps where self-ratings diverge from others’ ratings.
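The aggregation logic described above can be sketched in a few lines. This is a minimal illustration, not Huneety’s actual implementation: the field names and the one-point perception-gap threshold are assumptions for the example.

```python
from statistics import mean

def summarize(ratings, target, gap_flag_threshold=1.0):
    """Aggregate multi-rater scores for one competency.

    ratings: {rater_group: [scores on the 0-5 Dreyfus scale]}
    Returns per-group averages, the gap to target proficiency, and a
    perception-gap flag when the self score diverges from the average
    of all other groups by more than the threshold (the 1.0-point
    threshold is an illustrative assumption, not a platform default).
    """
    group_avg = {g: mean(scores) for g, scores in ratings.items() if scores}
    others = [s for g, scores in ratings.items() if g != "self" for s in scores]
    current = mean(others)  # "current proficiency" = average of non-self raters
    return {
        "group_averages": group_avg,
        "current": round(current, 2),
        "gap_to_target": round(target - current, 2),
        "perception_gap": abs(group_avg["self"] - current) > gap_flag_threshold,
    }

ratings = {
    "self": [4],
    "manager": [3],
    "peers": [3, 2, 3],
    "direct_reports": [2, 3, 3],
}
print(summarize(ratings, target=4))
```

In this example the self score (4) sits more than a point above what everyone else observes (about 2.7), so the competency would be flagged as a potential blind spot.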
The output is a 13-page structured report that includes a spider chart, gap analysis, SWOT quadrant, blind spot detection, and an AI-generated executive summary.
The anonymity rule
Rater anonymity protects data quality. If raters fear their individual scores will be traced back to them, they soften their ratings.
The standard anonymity rule: group-level data only appears when at least three raters in that group have responded. Below three, the data is suppressed entirely. Individual rater names are never shown in the report.
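The suppression rule is simple enough to express directly. A minimal sketch: the three-rater minimum and the openly shown manager group come from the rules above, while the function and variable names are illustrative.

```python
MIN_RATERS = 3  # groups below this threshold are suppressed entirely

def visible_groups(responses_by_group, always_visible=("manager",)):
    """Return only the rater groups safe to show in the report.

    A group's data appears when it has at least MIN_RATERS responses.
    The manager group (typically a single rater) is shown openly by
    design, since a group of one cannot be anonymized.
    """
    return {
        group: scores
        for group, scores in responses_by_group.items()
        if len(scores) >= MIN_RATERS or group in always_visible
    }

responses = {"manager": [3], "peers": [3, 4, 2], "direct_reports": [4, 2]}
print(visible_groups(responses))
# direct_reports is dropped: only 2 raters responded
```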
This matters in Southeast Asian contexts (Thailand, Indonesia, Malaysia) where cultural norms around hierarchy and face-saving can suppress honest feedback. Anonymity does not eliminate these dynamics, but it reduces them enough to produce usable data.
For the manager group (typically one person), the report shows the manager’s ratings openly since there is no way to anonymize a single rater. Assessees know this going in.
Built for HR teams
Run your next 360 assessment project
Multi-rater assessments with branded reports, automatic reminders, and development recommendations. Framework to reports in 3 weeks.
See how it works →
What happens after the assessment
The assessment itself is a data collection exercise. The value comes from what happens next.
Each assessee receives their report. HR or a coach debriefs the results in a structured 60- to 90-minute session. The debrief focuses on perception gaps (where self-ratings diverge from others’), the SWOT quadrant (strengths, improvements, blind spots, hidden strengths), and the top 3 development priorities.
Those priorities feed directly into an individual development plan built on the 70/20/10 framework: 70% on-the-job stretch assignments, 20% learning through others (mentoring, shadowing), and 10% formal learning (courses, certifications).
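As a back-of-the-envelope illustration, the 70/20/10 split can be applied to a development time budget. The 100-hour budget below is a hypothetical example, not a recommendation.

```python
def split_70_20_10(total_hours):
    """Split a development time budget per the 70/20/10 framework."""
    return {
        "on_the_job": round(total_hours * 0.70),  # stretch assignments
        "social": round(total_hours * 0.20),      # mentoring, shadowing
        "formal": round(total_hours * 0.10),      # courses, certifications
    }

print(split_70_20_10(100))
# → {'on_the_job': 70, 'social': 20, 'formal': 10}
```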
The next assessment cycle (typically 6 to 12 months later) measures whether the gaps closed. That before-and-after comparison is the most concrete evidence of development ROI an HR team can present.
Frequently asked questions
Running your first 360 assessment project, or scaling an existing program? Huneety works with HR teams running project-based or annual assessments, and with HR consultants who deliver assessments under their own brand.