A 360 assessment is one of the highest-trust exercises an organization can run. People give honest feedback, sometimes about their boss, with the expectation that the data will be handled responsibly and used for development. When that trust gets broken, by sloppy rater selection or a missed debrief, the program is harder to relaunch the second time than it would have been to launch right the first time. This 360 assessment checklist covers the five checks to run before the first invitation goes out.
What is a 360 assessment? A multi-rater feedback exercise where an employee receives input from people above, below, beside, and outside their reporting line, on a defined set of competencies. Used for development, performance, or training-program effectiveness, with very different stakes for each. Run on platforms like Huneety's 360 assessment platform with built-in anonymity, rater-group structure, and report templates.
Why pre-launch matters more than the questionnaire
Most 360 programs fail at the design stage, not the survey stage. By the time the questionnaire is in the field, the success or failure of the program is already mostly determined. The decisions that matter (the purpose, the rater groups, the anonymity guarantees, the debrief plan) all happen before anyone sees a question.
Below is the checklist we run with HR teams in the two weeks before a 360 launches. Five checks, in order. Skip any one of them and the data quality, response rate, or downstream development conversations suffer.
Define the why
Decide whether this is for development, performance, or program effectiveness. The choice changes everything downstream.
Reassure on anonymity
Set the anonymity rules (groups of 2+, line manager identifiable). Communicate them before the invitation.
Select the right raters
Above 1+, Below 2-3, Side 2-3, plus optional Others. Quality of observation, not friendship.
Brief raters on traps
Last-event bias, halo effect, over/under rating. Three minutes of guidance prevents weeks of bad data.
Plan the debrief
Sequence the conversation: HR + line manager first, then the employee. Connect to an IDP within two weeks.
Check 1: define the why
Before any other decision, name what this 360 is for. There are three legitimate purposes, and they have very different stakes.
The choice changes the brief to raters, the report template, the debrief sequence, and the consequences attached to the result. Confusing development with performance, or letting people assume one when you mean the other, is the single most common reason 360 programs lose trust.
Write the purpose in one sentence. Test it on three managers before the invitation goes out. If they read it three different ways, rewrite it.
Check 2: reassure on anonymity
Without anonymity, you don't get feedback. You get politics. Two practices keep the trust intact.
- Only the line manager is identifiable in the final report (alongside HR, who oversees development). Direct reports and peers are reported in groups of two or more. If a rater group has fewer than two responders, their input is folded into a broader group rather than shown alone.
- Communicate the rules before the invitation, not buried in a privacy notice. A two-sentence explanation in the kickoff email, repeated at the top of the questionnaire, sets the contract.
If your platform doesn't enforce these defaults automatically, change platforms before launching. Hand-rolled anonymity always leaks somewhere.
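The folding rule above can be sketched in a few lines. This is a minimal illustration, not any platform's actual logic: the group names, the fold order, and the threshold of two are assumptions for the example.

```python
# Sketch of the anonymity rule: groups with fewer than MIN_GROUP_SIZE
# responders are folded into the next (assumed) group rather than shown
# alone. The line manager group is exempt: it is identifiable by design.
MIN_GROUP_SIZE = 2
FOLD_ORDER = {"direct_reports": "peers", "peers": "others", "others": None}

def enforce_anonymity(responses):
    """responses: list of (rater_group, score) tuples.
    Returns scores keyed by group, with under-threshold groups folded
    forward; a lone last group is suppressed rather than shown alone."""
    grouped = {}
    for group, score in responses:
        grouped.setdefault(group, []).append(score)

    for group in list(FOLD_ORDER):
        scores = grouped.get(group, [])
        if 0 < len(scores) < MIN_GROUP_SIZE:
            target = FOLD_ORDER[group]
            del grouped[group]
            if target is not None:
                grouped.setdefault(target, []).extend(scores)
    return grouped
```

For example, a single peer response would be merged into the "others" group before the report is generated, so no individual rater is ever exposed.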
The full 360 picture
The complete guide to 360 assessments
Methodology, rater groups, report structure, four 360 variants, and the five mistakes that kill programs. The full framework behind the checklist above.
Read the guide
Check 3: select the right raters
The participant nominates the raters, with support from line manager and HR. The participant must know who is being asked, and be comfortable with the list. Surprise raters destroy trust faster than any other mistake.
Pick raters who are in the best position to observe the participant's behavior on a regular basis. They don't have to be friends. They have to have seen the work. A peer in another office who only sees the participant in quarterly meetings will provide thinner data than a colleague who sits a few desks away and sees the work every day.
Reach out personally. The participant should send a short note to each rater explaining the program, asking for honest feedback, and thanking them for the time. This single step lifts response rates from around 60% to north of 85% in our experience.
Check 4: brief raters on the rating traps
Three traps appear in nearly every 360 program, and a three-minute briefing prevents them.
Common rating traps
- Last-event bias: rating from the most recent interaction only
- Over- or under-rating: scores clustered at one end without evidence
- Halo effect: one strong quality (e.g. likeability) bleeding into other ratings
What to do instead
- Ask raters to recall multiple specific moments across 6-12 months
- Require a one-line evidence note next to any rating at the extremes
- Rate one competency at a time, with behavior anchors per scale point
In the rater guidelines, name these three traps explicitly. Then ask raters to base their feedback only on behaviors they have directly observed, and to skip questions where they have no observation rather than guess. "I haven't observed this" is a more useful answer than a 3 out of 5 with no evidence behind it.
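The two guideline rules above (evidence notes at the extremes, and skipping instead of guessing) can be expressed as a simple validation pass. This is a hypothetical sketch: the scale bounds and field names are assumptions, not any real platform's API.

```python
# Flag extreme ratings submitted without an evidence note. A skipped
# question (score of None, i.e. "I haven't observed this") is a valid
# answer and is never flagged.
SCALE_MIN, SCALE_MAX = 1, 5

def flag_unsupported_extremes(ratings):
    """ratings: list of dicts with 'competency', 'score' (int or None),
    and an optional 'evidence' note. Returns the competencies rated at
    either end of the scale with no supporting evidence."""
    flagged = []
    for r in ratings:
        score = r.get("score")
        if score is None:
            continue  # skipping is explicitly allowed
        at_extreme = score in (SCALE_MIN, SCALE_MAX)
        if at_extreme and not r.get("evidence", "").strip():
            flagged.append(r["competency"])
    return flagged
```

A check like this can run before submission, prompting the rater for a one-line example rather than rejecting the rating outright.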
Check 5: plan the debrief and the IDP handoff
The 360 report on its own changes nothing. The debrief is what makes it count. Three steps, in sequence.
- HR + line manager debrief first. The two of them go through the report before the employee sees it, agree on the tone, and prepare the development conversation. Surprises in front of the employee damage trust.
- Line manager debriefs the employee. In a dedicated 1:1, never as part of a regular performance review. HR can be present if the report is sensitive or if the line manager is new to running these conversations.
- IDP within two weeks. The output of the debrief is a development plan, not a filed report. If two weeks pass with no IDP, the program loses its credibility for the next cycle.
The IDP itself follows the 70-20-10 framework: stretch assignments to apply the strengths the 360 surfaced, coaching to address the gaps, and formal learning where it reinforces the other two. For leadership-focused 360s, see 360 feedback for leadership development for the leadership-specific patterns.
Built for HR teams
Run your next 360 on Huneety
Branded reports, automatic anonymity enforcement, rater-group structure, and one-click IDP handoff to the development plan. Replaces homegrown forms and Excel-based scoring.
See how it works
Huneety helps HR teams launching 360 projects and consultants running 360 assessments with end-to-end platform support: rater logistics, branded reports, IDP handoff. Talk to our team about your next cycle.