Why peer data is the highest-signal rater group
Managers see upward behaviours: the deck, the status update, the conversation the assessee has in front of the manager's peers. Direct reports see downward behaviours: how the assessee manages, delegates, and coaches. Peers see horizontal behaviours: how the assessee collaborates across teams, handles conflict, runs working sessions, shows up in cross-functional meetings. For most mid-career competencies, the peer perspective is where the development priorities actually live.
- 1. Competencies where peers outperform: cross-functional collaboration, conflict resolution, meeting facilitation, information sharing, psychological safety, technical mentoring (peer-to-peer). Manager ratings on these often miss the behaviour entirely.
- 2. Competencies where managers outperform: strategic thinking, executive communication, business acumen, decision quality under ambiguity. Peers often do not see the manager-layer conversations where these play out.
- 3. Convergence is the gold standard: when manager, peers, and direct reports all converge on a gap, the development priority is non-negotiable. When they diverge, the conversation has to happen. Either way, the triangulation produces signal; the sketch below shows the logic.
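To make the convergence check concrete, here is a minimal sketch, assuming each rater group reduces to an average per competency on a 1-to-5 scale. The threshold values and names are illustrative assumptions, not Huneety's scoring logic.

```python
# Minimal triangulation sketch: per-group averages on a 1-5 scale.
# GAP_THRESHOLD and DIVERGENCE_DELTA are illustrative assumptions.
GAP_THRESHOLD = 3.0      # below this, a group is signalling a gap
DIVERGENCE_DELTA = 1.0   # spread beyond this means the groups disagree

def triangulate(ratings: dict[str, float]) -> str:
    """Classify one competency from per-group averages."""
    scores = list(ratings.values())
    if max(scores) - min(scores) > DIVERGENCE_DELTA:
        return "diverge: the debrief conversation has to happen"
    if all(s < GAP_THRESHOLD for s in scores):
        return "converge on gap: non-negotiable development priority"
    return "converge on strength: no flagged gap"

print(triangulate({"manager": 2.5, "peers": 2.8, "direct_reports": 2.6}))
# -> converge on gap: non-negotiable development priority
```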
The 360 rater stack
Five perspectives on the same competencies. Each group sees behaviours the others cannot.
| Group | What they see | Raters |
| --- | --- | --- |
| Self | Own view of capability and intent | 1 |
| Manager | Upward behaviours: how the assessee presents and decides | 1 |
| Peers | Horizontal behaviours: collaboration, conflict, cross-team work | 3 minimum |
| Direct reports | Downward behaviours: management, coaching, delegation | 3 minimum |
| External | Cross-org view: customers, partners, cross-functional peers | 2-3, optional |
Groups below 3 raters merge with peers for anonymity. Same competencies, same scale, across every group.
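The merge rule in that note reduces to a small piece of logic. Here is a minimal sketch, assuming plain per-group submission counts; the group names and structure are illustrative assumptions, not Huneety's schema.

```python
# Merge any multi-rater group with fewer than 3 submissions into
# peers, per the note above. Names are illustrative assumptions.
def displayable_groups(counts: dict[str, int]) -> dict[str, int]:
    out = dict(counts)
    for group in ("direct_reports", "external"):
        if 0 < out.get(group, 0) < 3:   # too small to stay anonymous
            out["peers"] = out.get("peers", 0) + out.pop(group)
    return out

print(displayable_groups({"self": 1, "manager": 1, "peers": 4,
                          "direct_reports": 2}))
# -> {'self': 1, 'manager': 1, 'peers': 6}
```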
How to pick peer raters
Peer rater selection is where 360 programs quietly fail. Raters the assessee hand-picks with no review produce friendly, confirming data. Raters HR assigns in a vacuum produce low-context data. The combination that works is assessee-proposed, manager-reviewed, HR-balanced; the steps below walk through it, and a validation sketch follows the list.
- 1. Assessee proposes 4 to 6 peer names: instruct the assessee to include at least 2 peers they do not naturally work well with. If they only propose friends, the feedback becomes noise. Make the ask explicit: pick peers who will give you honest feedback, not confirming feedback.
- 2. Manager reviews the list: the manager cross-checks the proposed list for balance (functions, seniority, relationship mix). They can add or remove names but should not replace more than 2 of the original picks. Replacing the whole list signals distrust, and the assessee knows it.
- 3. HR checks the minimum-3 rule: a peer group of 2 merges with the external group or gets dropped from the report because anonymity cannot be preserved. A peer group of 3 to 6 is the operating range. Above 7, rater fatigue and diminishing returns kick in.
- 4. Exclude direct reports from the peer pool: direct reports have their own rater group (with its own anonymity rules and its own observational bias). Mixing them into the peer pool contaminates both groups. If the assessee does not manage anyone, the direct-report group is simply not used.
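These four rules are mechanical enough to check in code. Below is a minimal validation sketch, assuming a simple rater record; the field names and messages are illustrative assumptions, not Huneety's implementation.

```python
# Validate a final peer list against the proposed one, per the
# rules above. The Rater shape is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Rater:
    name: str
    team: str
    is_direct_report: bool = False

def validate_peer_list(proposed: list[Rater],
                       final: list[Rater]) -> list[str]:
    issues = []
    if not 3 <= len(final) <= 6:                       # rules 1 and 3
        issues.append("peer group must land in the 3-to-6 range")
    if any(r.is_direct_report for r in final):         # rule 4
        issues.append("direct reports belong in their own group")
    kept = {r.name for r in proposed} & {r.name for r in final}
    if len(proposed) - len(kept) > 2:                  # rule 2
        issues.append("manager replaced more than 2 original picks")
    return issues
```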
How peer anonymity actually works
The anonymity model is load-bearing for peer honesty. If peers suspect their individual ratings can be traced back to them, they will sanitise. On Huneety, peer anonymity works at two levels: the group average requires at least 3 raters before it displays, and written comments are paraphrased before they reach the report so identifiable phrasing does not leak.
- 1. Minimum 3 raters per group: below 3, Huneety merges the peer group with the external group (if present) or suppresses the group average entirely. The report shows a note: 'insufficient raters for group anonymity'.
- 2. Comments paraphrased by Huna AI: raw peer comments often contain identifying phrasing ('when we worked on the Q3 launch together'). Huna rewrites the comment to preserve the substance while removing the identifier. The assessee sees the point; they do not see who made it.
- 3. Token-based access, no login: raters click a secure link, rate, submit. No password, no account. The token expires after 7 days. No rater identity is stored beyond the invitation record, and the invitation record is separated from the rating record at ingestion, as the sketch below illustrates.
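As a rough picture of that separation: a token ties an invitation to a campaign and a rater group, and ingestion drops the link before the rating is stored. The in-memory stores, the TTL arithmetic, and every name below are illustrative assumptions, not Huneety's implementation.

```python
# Sketch of token-based access with invite/rating separation: the
# rating record keeps only campaign and group, never the rater.
import secrets
import time

TOKEN_TTL = 7 * 24 * 3600          # tokens expire after 7 days

invitations: dict[str, dict] = {}  # token -> invite; dropped at ingestion
ratings: list[dict] = []           # no rater identity stored here

def invite(campaign: str, group: str) -> str:
    token = secrets.token_urlsafe(16)
    invitations[token] = {"campaign": campaign, "group": group,
                          "expires": time.time() + TOKEN_TTL}
    return token

def ingest(token: str, scores: dict[str, int]) -> None:
    inv = invitations.pop(token, None)   # sever the invite/rating link
    if inv is None or time.time() > inv["expires"]:
        raise ValueError("invalid or expired token")
    ratings.append({"campaign": inv["campaign"],
                    "group": inv["group"], "scores": scores})
```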
Peer anonymity built in, not bolted on
Token-based access, minimum-3 group rule, AI-paraphrased comments. Peers answer honestly because the model is trustworthy by design.
Three failure modes in the peer round
The same three patterns recur across every first 360 we run with a new customer. Screen for them before launch and the peer round produces signal; a screening sketch follows the list.
- 1. All peers picked from one team: if every peer sits on the assessee's own team, the feedback is in-group biased. The peer set should span at least 2 teams or functions when the assessee works cross-functionally.
- 2. The passive-aggressive peer: occasionally one peer uses the open-text field as a performance-review venue. The Huna paraphrase layer softens this, and the debrief should discount clear outliers. If 3 of 5 peers converge and 1 dissents hard, the 3 are the signal.
- 3. Peer round during a crunch: launching the peer round during quarter-end, a product launch, or a restructure tanks response rates. Aim for a calm 2-week window. If the calendar refuses to cooperate, push the cycle rather than push through.
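The first two failure modes can be screened mechanically. Here is a minimal sketch, assuming each peer carries a team label and ratings sit on the shared scale; the cutoff values are illustrative assumptions, not tuned numbers.

```python
# Pre-launch screens for failure modes 1 and 2.
from statistics import median

def screen_teams(teams: list[str], cross_functional: bool) -> list[str]:
    """Failure mode 1: a cross-functional assessee's peer set
    should span at least 2 teams or functions."""
    if cross_functional and len(set(teams)) < 2:
        return ["all peers sit on one team: in-group bias likely"]
    return []

def flag_outliers(scores: list[float], cutoff: float = 1.5) -> list[int]:
    """Failure mode 2: flag ratings far from the group median so
    the debrief can discount the hard dissenter."""
    m = median(scores)
    return [i for i, s in enumerate(scores) if abs(s - m) > cutoff]

print(screen_teams(["platform"] * 4, cross_functional=True))
print(flag_outliers([4.0, 3.8, 4.1, 1.5]))  # -> [3], the hard dissenter
```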
Peer feedback on Huneety
Huneety runs the peer round on the same campaign as every other rater group. Invitations go out via token link (no login for peers), reminders fire automatically, responses aggregate by group with the minimum-3 rule enforced, and comments are paraphrased by Huna before reaching the report. HR sees real-time response rates per group. See the Huneety 360 platform.
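The real-time response-rate view reduces to per-group arithmetic plus an anonymity-risk flag. This sketch models only that arithmetic; the guide does not expose a Huneety API, so every name here is an assumption.

```python
# Per-group response rates, flagging multi-rater groups that have
# not yet reached the minimum-3 rule. Names are assumptions.
MULTI_RATER_MIN = 3

def group_status(invited: dict[str, int],
                 submitted: dict[str, int]) -> dict[str, dict]:
    status = {}
    for group, n in invited.items():
        done = submitted.get(group, 0)
        status[group] = {
            "rate": round(done / n, 2) if n else 0.0,
            "anonymity_at_risk": n >= MULTI_RATER_MIN
                                 and done < MULTI_RATER_MIN,
        }
    return status

print(group_status({"peers": 5, "direct_reports": 3},
                   {"peers": 4, "direct_reports": 1}))
# peers: 0.8, fine; direct_reports: 0.33, anonymity at risk
```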
Quick answers
- How many peer raters do I need?
- 3 is the minimum for group anonymity. 4 to 6 is the operating range. Above 7, response rates drop and rater fatigue becomes the bottleneck. Quality beats quantity: 4 peers who see the assessee from different angles beat 8 peers who all sit on the same team.
- Who picks the peer raters?
- Assessee proposes, manager reviews, HR balances. Each step adds something the other cannot: the assessee knows who has seen their work, the manager knows who will be candid, and HR enforces the anonymity and balance rules across the whole cohort.
- Can peer ratings be anonymised fully per individual?
- No, and you would not want that. Per-individual anonymity makes the ratings untraceable even to the HR team running the program, which means no way to check for rater fatigue or missing submissions. Group-level anonymity (minimum 3 raters, paraphrased comments) is the right balance.
Continue learning
360 self-evaluation guide
How self-ratings behave in a 360, why they overstate by half a level, and how to run a self-eval that produces useful data.
Read the guide
360 assessment report guide
What a Huneety 360 report contains page by page, how to read the radar, the SWOT, the blind spots, and the perception-gap matrix.
Read the guide
Culture fit assessment guide
How to run a culture-fit 360 using observable behaviours instead of personality inventories, and when it actually makes sense.
Read the guide