

360 vs performance review: when to use each

Simon Carvi · April 28, 2026 · 7 min read
[Figure: two hub-and-spoke diagrams contrasting the 180 single-rater performance review with the 360 multi-rater competency review]

A performance review is a manager-led evaluation of past results that drives compensation and promotion decisions. A 360 assessment is a multi-rater evaluation of competencies that drives development plans and readiness signals. Use a performance review for the bonus letter. Use a 360 for the IDP. Run both, on different cycles, for different decisions.

The 30-second answer

The two assessments serve different decisions, use different rater pools, and follow different rules.

A performance review measures what someone produced against goals, KPIs, and deliverables. The manager rates. The score drives compensation, bonus, and promotion decisions. It runs annually or quarterly as part of the performance cycle.

A 360 assessment (or competency review) measures how developed someone is across observable behaviours. Self, manager, peers, and direct reports rate. The output drives development plans, succession suitability, and leadership readiness. It runs once per development cycle, usually yearly.

Treating them as substitutes breaks both.

What performance reviews actually do

Performance reviews evaluate output. Did the person hit their goals, deliver their KPIs, and ship the projects assigned? The rater is the line manager. The format is 180-degree (manager only, sometimes with self-rating included). The scale rates outcomes against expectations.

A performance review drives the comp letter. That is its job.

That makes it a high-stakes moment. Raters protect their teams' bonus pool. Ratings cluster around 'Meets' to keep the budget line. Conversations about gaps get softened to avoid hurting comp outcomes.

This is not a flaw. It is the design constraint: the review exists to drive comp decisions, and rater behaviour follows from those stakes.

Decisions a performance review supports:

  • Performance ratings and bonus eligibility
  • Salary increase and promotion cases
  • Performance improvement plans

What 360 reviews actually do

A 360 evaluates how someone works rather than what they produced. The competencies under review are observable behaviours: how they make decisions, how they communicate, how they develop their team, how they handle ambiguity.

Multiple raters score the same behaviours from different vantage points. Self. Manager. Peers. Direct reports. Sometimes clients or partners. The ratings combine into a competency profile that surfaces strengths, blind spots, and gaps against the role's target profile.

The output is not a score for a comp letter. It is a gap report. The gap report drives:

  • An individual development plan (IDP) using the 70/20/10 framework
  • A readiness signal for the next role (succession planning)
  • A coaching conversation, not a salary conversation

This is the structural reason 360 reviews surface less bias than performance reviews, provided raters are told clearly that the data does not affect compensation. Without that signal, raters revert to performance-review behaviour: cluster scores, protect the bonus, soften the gaps.

A 360 drives the IDP, not the comp letter. The methodology only works if that is true in practice.

How to pick raters for each

Rater selection is the single biggest factor in 360 data quality. It barely matters in a performance review.

For a performance review, the rater is the line manager. Sometimes a skip-level reviews the manager's ratings before the comp letter goes out. The selection question is trivial: who manages this person? Done.

For a 360, rater selection is the methodology. Get it wrong and the data is compromised before the assessment even opens. Three rules that hold up across organisations:

  1. Minimum 7 raters per person. Below 7, anonymity breaks (the rated person can guess who said what); below 5, the data is noise.
  2. Mix the rater types. A 360 with 6 peers and 1 manager is not a 360. Aim for self, manager, at least 2 peers, 2 reports (or cross-functional collaborators if no reports), and ideally one external partner or client.
  3. Exclude conflict of interest. A peer competing with the rated person for promotion cannot rate cleanly. Neither can a report whose own review last cycle went badly. Pre-screen the rater list.

The rated person nominates raters, the manager approves, HR reviews the final list. That three-step gate prevents two failure modes: cherry-picking friendly raters (rated person), and stacking the deck against someone (manager). Rater selection takes longer than the actual assessment. That ratio is correct. The data quality is downstream of who gets to rate.
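
As a rough sketch of those gates, the check below validates a nominated rater list against the three rules. The function name, data shape, and minimum mix per rater type are illustrative assumptions, not a prescribed implementation.

```python
from collections import Counter

MIN_RATERS = 7                                                     # below this, anonymity breaks
REQUIRED_MIX = {"self": 1, "manager": 1, "peer": 2, "report": 2}   # assumed minimum per rater type


def validate_rater_list(raters, conflicts=frozenset()):
    """Check a nominated 360 rater list against the three rules above.

    `raters` is a list of (name, rater_type) tuples; `conflicts` is the set of
    names flagged during pre-screening (e.g. competing for the same promotion).
    Returns a list of problems; an empty list means the nomination passes.
    """
    problems = []

    # Rule 1: enough raters for the data to be usable and anonymous.
    if len(raters) < MIN_RATERS:
        problems.append(f"Only {len(raters)} raters nominated; minimum is {MIN_RATERS}.")

    # Rule 2: mix the rater types instead of stacking one group.
    counts = Counter(rater_type for _, rater_type in raters)
    for rater_type, minimum in REQUIRED_MIX.items():
        if counts[rater_type] < minimum:
            problems.append(f"Need at least {minimum} '{rater_type}' rater(s), got {counts[rater_type]}.")

    # Rule 3: exclude anyone flagged for conflict of interest.
    flagged = [name for name, _ in raters if name in conflicts]
    if flagged:
        problems.append("Conflict of interest: remove " + ", ".join(flagged) + ".")

    return problems
```

In the three-step gate, a check like this would run after the manager approves the list and before HR sends the invitations.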

Side-by-side: six dimensions

The same person can score 'Outstanding' on a performance review and 'Competent' on a Dreyfus scale in the same year. Both are correct, because they answer different questions. The decision frame below maps which tool to use when.

Decision frame

Performance review (180)

  • Purpose: evaluate past results
  • Decision: compensation, bonus, promotion
  • Raters: manager only (180-degree)
  • Anonymity: not anonymous
  • Scale: Performance 1 to 5 (Needs Improvement to Outstanding)
  • Output: performance score, drives the comp letter

360 / competency review

  • Purpose: surface development gaps and readiness
  • Decision: development plans, succession suitability, leadership readiness
  • Raters: self, manager, peers, reports (multi-rater)
  • Anonymity: anonymous (raters protected)
  • Scale: Dreyfus 0 to 5 (Novice to Expert, with N/A)
  • Output: gap report, drives the IDP and not the comp letter

Why the scales aren't comparable

A '3' on a performance review and a '3' on a Dreyfus scale do not mean the same thing.

A performance review rates whether the person did the job to the level expected. The 3 is 'Meets'. They delivered what was asked. It is a binary judgment dressed up as a scale.

A Dreyfus rating measures how developed someone's competency is. The 3 is 'Competent'. They plan their own work, distinguish what matters, and accept responsibility for the outcome. It is a stage of expertise, not a binary verdict.

Same number. Different meaning. Two examples:

  • 'You are at Level 3 (Meets)' closes the conversation. The person performed.
  • 'You are at Level 3 (Competent), and the role profile expects Level 4 (Proficient) for next year' opens the conversation. There is a gap, and a development plan can close it.

The asymmetry is structural too. Dreyfus extends to a Level 0 ('not exposed' or 'outside scope') because development assumes growth from zero. Performance scales start at 1. The assumption is the person is at least doing the job.

How the scales actually compare

Performance scale (1 to 5)

Used in performance reviews. Drives compensation.

  1. Needs Improvement
  2. Developing
  3. Meets
  4. Exceeds
  5. Outstanding

Dreyfus scale (0 to 5)

Used in 360 / competency reviews. Drives development plans.

  0. Not exposed
  1. Novice
  2. Advanced beginner
  3. Competent
  4. Proficient
  5. Expert

Same numbers, different meaning. 3 on the left is a verdict ('Meets'). 3 on the right is a stage of expertise ('Competent').
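
To make the non-equivalence concrete, the sketch below keeps the two scales as separate types and computes a development gap only on the Dreyfus side. The enum and function names are illustrative; the labels mirror the lists above.

```python
from enum import IntEnum


class PerformanceRating(IntEnum):
    """Performance scale: a verdict on delivery against expectations."""
    NEEDS_IMPROVEMENT = 1
    DEVELOPING = 2
    MEETS = 3
    EXCEEDS = 4
    OUTSTANDING = 5


class DreyfusLevel(IntEnum):
    """Dreyfus scale: a stage of competency development, starting at zero."""
    NOT_EXPOSED = 0
    NOVICE = 1
    ADVANCED_BEGINNER = 2
    COMPETENT = 3
    PROFICIENT = 4
    EXPERT = 5


def competency_gap(current: DreyfusLevel, target: DreyfusLevel) -> int:
    """Gap between the observed level and the role profile's target level.

    A positive gap feeds the IDP; zero or below means the competency is
    already at target. Performance ratings never enter this calculation:
    a 'Meets' (3) is not a 'Competent' (3).
    """
    return int(target) - int(current)


# The example from the prose: Competent today, Proficient expected next year -> gap of 1.
gap = competency_gap(DreyfusLevel.COMPETENT, DreyfusLevel.PROFICIENT)
```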

Deep dive

The Dreyfus proficiency scale, level by level

What each level looks like in practice, how to write behavioural anchors, and the three mistakes that break the scale.

Read the guide

The decision: which to use when

The choice depends on what decision sits at the end of the cycle.

For compensation, bonus, and promotion ratings

Run a performance review. The methodology is built for it. The rater is accountable to the comp budget. The cycle aligns with the company's financial calendar.

For development plans and learning investment

Run a 360 / competency review. The methodology surfaces gaps the manager alone cannot see. Multi-rater data corrects for one-manager bias. The output feeds the 70/20/10 development plan, not the comp letter.

For succession planning and leadership readiness

Run a 360 / competency review. The data tells you whether a high-potential employee is ready for the next role, not whether they hit last year's goals. Performance is a lagging indicator. Competency readiness is a leading indicator.

For PIPs and underperformance management

Stay in performance-review territory. A 360 is not the right tool to manage someone out. The anonymity that protects honest feedback in development is the same anonymity that breaks under HR-process scrutiny.

The bias dynamic

Competency reviews surface less bias than performance reviews, when raters know the data does not affect compensation. That single communication choice determines whether you get honest feedback or sanitised scores. When raters believe the 360 result will land in a comp file, they rate as if it is a performance review. The methodology fails before the assessment opens.

The mistake: treating a 360 like a perf review

When organisations conflate the two, three things break in sequence:

  1. Raters revert to perf-review behaviour. Peers worry their honest feedback will be used against the colleague, so scores cluster and the data flattens.
  2. Self-ratings inflate. When the rated person knows the result drives comp, they rate themselves higher to set a stronger negotiating position.
  3. Managers stop trusting the data. Once the scores look sanitised, the gap report becomes a political document, not a development input. Leadership stops investing in the program.

The fix is structural, not just better messaging. If your HR system carries the 360 result into the comp module, raters notice. The signal that 'this won't affect your bonus' only works if it is operationally true.
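
One way to make the boundary operationally true is to enforce it in the system rather than in a policy note. The module and field names below are hypothetical; the point is that 360 results have no code path into the comp module.

```python
# Hypothetical export guard: 360 results may only flow to development destinations.
ALLOWED_DESTINATIONS = {"development_plan", "succession_report"}


def export_assessment_result(result: dict, destination: str) -> dict:
    """Refuse any export of 360 data outside the development lane."""
    if result.get("source") == "360_competency_review" and destination not in ALLOWED_DESTINATIONS:
        raise PermissionError(
            f"360 results cannot be exported to '{destination}'; "
            "the development/compensation boundary is enforced in code, not in a memo."
        )
    return {"destination": destination, "payload": result}
```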

How they work together

Most organisations need both, on different cycles, for different decisions.

The two cycles do not run at the same time, and they do not feed the same decisions. The performance review produces a comp outcome. The 360 produces a development outcome. The IDP closes the gap surfaced by the 360. Next year's performance review measures whether the person delivered against new goals. The cycle restarts.

How performance review and 360 coexist in one annual cycle
  1. Q1: Performance review (180)

    Manager-led. Output: comp and promotion decisions.

  2. Q2: Competency review (360)

    Multi-rater. Output: gap report and readiness signal.

  3. Q3: IDP build

    70/20/10 development plan on the prioritised gaps.

  4. Q4: Progress check

    Mid-cycle review and planning for next year's window.

Reading both reports together

The combined view is what makes the cycle work. Each report alone tells half the story.

Once both cycles run, every employee has two data points: a performance rating from the comp cycle, and a competency profile from the 360. The manager's job is to read them together. Four scenarios, four different actions:

  1. High performance + high competency. The person delivered, and has the developed behaviours to keep delivering as scope grows. Succession candidate. Accelerate the IDP into stretch roles. This is your bench.
  2. High performance + low competency. Delivers in the current role but on narrow strengths or autopilot. The risk surfaces when scope changes: promotion, re-org, new market. Build the IDP around the missing competencies before the next stretch assignment, not after.
  3. Low performance + high competency. Capable, but blocked. The competency profile is fine. The performance miss is environmental: wrong manager, wrong role, blocked by an upstream team, personal circumstance. Fix the environment, not the person. A development plan will not fix what is not a development gap.
  4. Low performance + low competency. Mismatch. Investigate role fit, role design, or coaching needs before defaulting to PIP. If the person was promoted recently, treat as onboarding. If they have been in role over a year, the performance-review side of the cycle takes over.

The 360 alone cannot tell you scenario 3. The performance review alone cannot tell you scenario 2. Run both, read both.
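
Reading the two reports together is effectively a quadrant classification. The sketch below maps a performance rating and an average competency level onto the four scenarios; the cut-offs ('Meets' or better, 'Competent' or better) are illustrative, not a standard.

```python
def read_both_reports(performance_rating: int, avg_competency_level: float) -> str:
    """Map a performance rating (1-5) and an average Dreyfus level (0-5)
    onto the four scenarios above. Thresholds are illustrative cut-offs."""
    high_performance = performance_rating >= 3      # 'Meets' or better
    high_competency = avg_competency_level >= 3.0   # 'Competent' or better

    if high_performance and high_competency:
        return "Succession candidate: accelerate the IDP into stretch roles."
    if high_performance and not high_competency:
        return "Delivers on narrow strengths: close the competency gaps before the next scope change."
    if not high_performance and high_competency:
        return "Capable but blocked: fix the environment, not the person."
    return "Mismatch: investigate role fit and coaching before defaulting to a PIP."
```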

How Huneety handles this

Performance management belongs in your HRIS. Development belongs in Huneety.

Huneety is built for the competency review side of the cycle. The 360 product runs multi-rater assessments against your competency framework, on the Dreyfus 0 to 5 scale (or any custom scale you define), with raters anonymised by default. The output is a gap report linked directly to a 70/20/10 IDP. The data does not exit the development module. It does not flow into compensation systems.

This is a deliberate boundary. The two integrate at the report level (a manager looking at one person sees both pictures), but the assessment data stays in its own lane.

See it in action

Run a 360 in Huneety

Multi-rater 360s, custom scales, anonymous by default, gap reports that drive IDPs and not comp letters.

Explore the platform

Full guide

360-degree assessment: the complete guide

Methodology, raters, reports, variants, and the five mistakes that kill 360 programs.

Read the guide

Common questions

Is a 360 assessment the same as a performance review?
No. A 360 measures developed competencies through multi-rater feedback. A performance review measures past output through manager evaluation. Different methodology, different decisions, different scales.

Can one replace the other?
No. The two answer different questions. A 360 cannot drive compensation decisions because the anonymity required for honest peer feedback breaks under comp-related scrutiny. A performance review cannot surface development gaps because one manager cannot see all the angles a person operates from.

Can 360 results feed compensation decisions?
No. The moment raters believe their feedback influences compensation, they sanitise their scores. Use 360 results for development plans and readiness signals only.

Why not use the same scale for both?
Dreyfus measures developed expertise. Performance scales measure delivery against expectations. They are different constructs. Using the same scale for both invites raters to conflate 'Meets' with 'Competent', which they are not.

What is the difference between a 180 and a 360?
A 180 assessment uses the manager only, sometimes with self-rating included. A 360 adds peers and reports. Performance reviews are typically 180. Competency reviews are typically 360. The choice depends on what decision the data feeds.

How often should each run?
Performance reviews follow the company's annual or quarterly comp cycle. 360 reviews typically run once per development cycle, often once a year, timed independently from the perf cycle so raters do not conflate the two.

Ready to close the gaps?

Book a demo. We'll show you how it works with your competency framework.