5 pre-launch checks for your 360 assessment project

Huneety Team · April 15, 2026 · 4 min read

A 360 assessment is one of the highest-trust exercises an organization can run. People give honest feedback, sometimes about their boss, with the expectation that the data will be handled responsibly and used for development. When that trust gets broken, by sloppy rater selection or a missed debrief, the program is harder to relaunch the second time than it would have been to launch right the first time. This 360 assessment checklist covers the five checks to run before the first invitation goes out.

What is a 360 assessment? A multi-rater feedback exercise where an employee receives input from people above, below, beside, and outside their reporting line, on a defined set of competencies. Used for development, performance, or training-program effectiveness, with very different stakes for each. Typically run on a platform like Huneety's 360 assessment platform, with built-in anonymity, rater-group structure, and report templates.

Why pre-launch matters more than the questionnaire

Most 360 programs fail at the design stage, not the survey stage. By the time the questionnaire is in the field, the success or failure of the program is already mostly determined. The decisions that matter (the purpose, the rater groups, the anonymity guarantees, the debrief plan) all happen before anyone sees a question.

Below is the checklist we run with HR teams in the two weeks before a 360 launches. Five checks, in order. Skip any one of them and the data quality, response rate, or downstream development conversations suffer.

The 5 pre-launch checks
  1. Define the why

    Decide whether this is for development, performance, or program effectiveness. The choice changes everything downstream.

  2. Reassure on anonymity

    Set the anonymity rules (groups of 2+, line manager identifiable). Communicate them before the invitation.

  3. Select the right raters

    Above 1+, Below 2-3, Side 2-3, plus optional Others. Quality of observation, not friendship.

  4. Brief raters on traps

    Last-event bias, halo effect, over/under rating. Three minutes of guidance prevents weeks of bad data.

  5. Plan the debrief

    Sequence the conversation: HR + line manager first, then the employee. Connect to an IDP within two weeks.

Check 1: define the why

Before any other decision, name what this 360 is for. There are three legitimate purposes: development, performance, and training-program effectiveness. Each carries very different stakes.

The choice changes the brief to raters, the report template, the debrief sequence, and the consequences attached to the result. Confusing development with performance, or letting people assume one when you mean the other, is the single most common reason 360 programs lose trust.

Write the purpose in one sentence. Test it on three managers before the invitation goes out. If they read it three different ways, rewrite it.

Check 2: reassure on anonymity

Without anonymity, you don't get feedback. You get politics. Two practices keep the trust intact.

  • Only the line manager is identifiable in the final report (alongside HR, who oversees development). Direct reports and peers are reported in groups of two or more. If a rater group has fewer than two responders, their input is folded into a higher group rather than shown alone.
  • Communicate the rules before the invitation, not buried in a privacy notice. A two-sentence explanation in the kickoff email, repeated at the top of the questionnaire, sets the contract.

If your platform doesn't enforce these defaults automatically, change platforms before launching. Hand-rolled anonymity always leaks somewhere.
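
To make the rule concrete, here is a minimal sketch of the folding logic in Python. The group names, data shapes, and fallback bucket are illustrative assumptions, not Huneety's implementation; the point is that the rule is mechanical, which is exactly why it belongs in the platform rather than in a hand-maintained spreadsheet.

```python
# Minimal sketch of the folding rule: anonymous rater groups with fewer
# than two responders merge into a broader bucket instead of being shown
# alone. Group names and data shapes are illustrative assumptions.

MIN_GROUP_SIZE = 2

# Where a too-small anonymous group folds to (assumed bucket name).
FALLBACK_BUCKET = "All raters"

def fold_small_groups(responses_by_group: dict) -> dict:
    """Return report groups with the anonymity rule applied."""
    folded = {}
    for group, responses in responses_by_group.items():
        if group == "Above" or len(responses) >= MIN_GROUP_SIZE:
            # The line manager ("Above") is identifiable by design.
            folded.setdefault(group, []).extend(responses)
        else:
            folded.setdefault(FALLBACK_BUCKET, []).extend(responses)
    return folded

# A single peer response is never reported as its own "Side" group:
report = fold_small_groups({
    "Above": [{"score": 4}],
    "Below": [{"score": 3}, {"score": 5}],
    "Side": [{"score": 2}],  # one responder -> folded into "All raters"
})
assert "Side" not in report and len(report[FALLBACK_BUCKET]) == 1
```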

The full 360 picture

The complete guide to 360 assessments

Methodology, rater groups, report structure, four 360 variants, and the five mistakes that kill programs. The full framework behind the checklist above.

Read the guide

Check 3: select the right raters

The participant nominates the raters, with support from line manager and HR. The participant must know who is being asked, and be comfortable with the list. Surprise raters destroy trust faster than any other mistake.

Pick raters who are in the best position to observe the participant's behavior on a regular basis. They don't have to be friends. They have to have seen the work. A peer in another office who only sees the participant in quarterly meetings will provide thinner data than a colleague who sits next to them every day.

Reach out personally. The participant should send a short note to each rater explaining the program, asking for honest feedback, and thanking them for the time. This single step lifts response rates from around 60% to north of 85% in our experience.
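
If nominations come back as a list or spreadsheet export, a quick structural check against the rater-group targets from the checklist (Above 1+, Below 2-3, Side 2-3, Others optional) catches thin lists before the invitations go out. A sketch with hypothetical field names, not a Huneety API:

```python
# Sketch of a pre-launch check on a nominated rater list, using the
# group targets from the checklist (Above 1+, Below 2-3, Side 2-3,
# Others optional). Field names are illustrative assumptions.

GROUP_TARGETS = {
    "Above": (1, None),   # at least the line manager
    "Below": (2, 3),
    "Side": (2, 3),
    "Others": (0, None),  # optional
}

def check_rater_list(raters: list) -> list:
    """Return warnings for any rater group outside its target range."""
    counts = {group: 0 for group in GROUP_TARGETS}
    for rater in raters:
        counts[rater["group"]] = counts.get(rater["group"], 0) + 1
    warnings = []
    for group, (low, high) in GROUP_TARGETS.items():
        n = counts[group]
        if n < low:
            warnings.append(f"{group}: {n} nominated, need at least {low}")
        elif high is not None and n > high:
            warnings.append(f"{group}: {n} nominated, trim to the {high} best observers")
    return warnings

print(check_rater_list([
    {"name": "A", "group": "Above"},
    {"name": "B", "group": "Below"},  # only one direct report
    {"name": "C", "group": "Side"},
    {"name": "D", "group": "Side"},
]))
# -> ['Below: 1 nominated, need at least 2']
```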

Check 4: brief raters on the rating traps

Three traps appear in nearly every 360 program. A three-minute briefing prevents them. They are worth naming explicitly.

Rating traps and what to do instead

  • Last-event bias: rating from the most recent interaction only. Instead, ask raters to recall multiple specific moments across 6-12 months.
  • Over- or under-rating: scores clustered at one end without evidence. Instead, require a one-line evidence note next to any rating at the extremes.
  • Halo effect: one strong quality (e.g. likeability) bleeding into other ratings. Instead, rate one competency at a time, with behavior anchors per scale point.

In the rater guidelines, name these three traps explicitly. Then ask raters to base their feedback only on behaviors they have directly observed, and to skip questions where they have no observation rather than guess. "I haven't observed this" is a more useful answer than a 3 out of 5 with no evidence behind it.
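
The scoring side follows from that guidance: treat "I haven't observed this" as missing data, never as a midpoint score. A minimal aggregation sketch under that assumption; the function and field names are hypothetical:

```python
# Sketch: average only over observed ratings, and report coverage so
# unobserved items can't silently pull a score toward the middle.
# Names and the coverage field are illustrative assumptions.

def aggregate_competency(ratings: list) -> dict:
    """None means 'I haven't observed this' and is excluded entirely."""
    observed = [r for r in ratings if r is not None]
    return {
        "mean": round(sum(observed) / len(observed), 2) if observed else None,
        "n_observed": len(observed),
        "coverage": len(observed) / len(ratings) if ratings else 0.0,
    }

# Two raters skipped the question instead of guessing a 3 out of 5:
print(aggregate_competency([4, 5, None, 4, None]))
# -> {'mean': 4.33, 'n_observed': 3, 'coverage': 0.6}
```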

Check 5: plan the debrief and the IDP handoff

The 360 report on its own changes nothing. The debrief is what makes it count. Three steps, in order.

  • HR + line manager debrief first. The two of them go through the report before the employee sees it, agree on the tone, and prepare the development conversation. Surprises in front of the employee damage trust.
  • Line manager debriefs the employee. In a dedicated 1:1, never as part of a regular performance review. HR can be present if the report is sensitive or if the line manager is new to running these conversations.
  • IDP within two weeks. The output of the debrief is a development plan, not a filed report. If two weeks pass with no IDP, the program loses its credibility for the next cycle.

The IDP itself follows the 70-20-10 framework: stretch assignments to apply the strengths the 360 surfaced, coaching to address the gaps, and formal learning where it reinforces the other two. For leadership-focused 360s, see 360 feedback for leadership development for the leadership-specific patterns.
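
Both the two-week deadline and the 70-20-10 split are mechanical enough to track. A sketch of the handoff structure; the schema and field names are illustrative assumptions, not a prescribed format:

```python
from datetime import date, timedelta

# Sketch of the debrief-to-IDP handoff: a due date two weeks out and
# the three 70-20-10 buckets. Field names are illustrative assumptions.

IDP_DEADLINE = timedelta(weeks=2)  # "IDP within two weeks" of the debrief

def idp_skeleton(debrief_date: date) -> dict:
    return {
        "due": debrief_date + IDP_DEADLINE,
        "experience_70": [],  # stretch assignments applying surfaced strengths
        "coaching_20": [],    # coaching on the gaps the 360 exposed
        "learning_10": [],    # formal learning reinforcing the other two
    }

plan = idp_skeleton(date(2026, 5, 4))
print(plan["due"])  # 2026-05-18: escalate if the plan is still empty by then
```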

Built for HR teams

Run your next 360 on Huneety

Branded reports, automatic anonymity enforcement, rater-group structure, and one-click IDP handoff to the development plan. Replaces homegrown forms and Excel-based scoring.

See how it works

Frequently asked questions

How long does a 360 project take?

From kickoff to debrief, four to six weeks for an organization of 100-500 participants. One week for setup and rater nomination, two weeks for the questionnaire to be open, one week for analysis, one to two weeks for debriefs. Compressing this into less than four weeks degrades response rates and rushes the debrief; stretching it past eight weeks causes drift and dropout.

How many questions should the questionnaire have?

20 to 40 questions covering 4 to 8 competencies. Beyond 40 questions, completion rates fall sharply, especially among the Above and Side rater groups who are doing this as a favor. Each competency should have 3 to 5 specific behavioral statements rather than one generic question, so raters can rate observed behaviors rather than vibes.

Who sees the final report?

The participant, the line manager, and HR. By default, no one else. Skip-level managers may see aggregated summaries for their team without individual scores. Senior leaders should not see individual reports unless they are the line manager. Treating 360 data as broadly visible breaks the anonymity contract and burns the next cycle.

Can a 360 be tied to performance decisions?

Possible but risky. Performance-tied 360s require much tighter calibration, more raters per participant (typically 6+ Below and Side), and very explicit anonymity guarantees. They also change rater behavior: people rate more cautiously when they know money is on the line. Most organizations use 360s for development first, then a stripped-down version for performance only after several stable cycles.

What response rate should you target?

85% or higher across all rater groups. Above 90% is achievable when participants reach out personally to raters, when the questionnaire is short, and when the platform sends well-timed reminders. Response rates below 70% usually indicate either rater fatigue (too many surveys), poor communication of the why, or anonymity concerns.

Huneety helps HR teams launching 360 projects and consultants running 360 assessments with end-to-end platform support: rater logistics, branded reports, IDP handoff. Talk to our team about your next cycle.


Ready to close the gaps?

Book a demo. We'll show you how it works with your competency framework.