At United States law firms, the most common complaint about upward feedback programs is not that partners resist them. It is that they produce data too diplomatic to act on. Associates give ratings that cluster between 3.5 and 4.5 out of 5 regardless of actual supervision quality. Open-text comments are carefully worded. Firm leadership receives a report that confirms everyone is performing adequately. And another review cycle ends with no partner development conversations, no supervision quality improvements, and no reduction in the attrition that the program was supposed to help prevent.
This outcome is not a question design problem. It is an architecture problem. The six mistakes in this guide are the specific design failures that produce useless upward feedback data at US law firms — each with the structural fix that SRA’s 30+ years of exclusive US law firm practice has identified as the solution. None of them require a new platform. All of them are implementable in the current review cycle.
Why upward feedback programs fail: architecture, not attitude
Partners at US law firms do not, on average, resist upward feedback programs more than partners at other professional service firms. Associates do not, on average, withhold honest feedback more than employees in other hierarchical organisations. The failure of most US law firm upward feedback programs is architectural: the program is designed in ways that make honest responses structurally risky for associates, which produces diplomatic data, which produces useless reports, which produces the conclusion that ‘associates won’t give honest feedback.’ That conclusion is wrong. The architecture is wrong. This guide identifies the six architectural failures and their structural fixes.
The Data Context: Why Getting Upward Feedback Right Matters in 2026
The 6 Mistakes US Law Firms Make With Upward Feedback Programs
Mistake 1: Data Stored in Firm-Administered Systems
Symptom: Participation is above 60%, but responses are diplomatically vague and clustered
Why it happens at US law firms: Most upward feedback programs at US law firms are administered through a platform that the firm contracts, configures, and administers internally. Whether it is a self-service SaaS platform, an HR module, or a general survey tool — the data lives in a system that firm IT administrators can access, that the managing partner could theoretically view, and that associates understand is not genuinely independent. Associates at US law firms know how the hierarchy works. They do not need to verify that their responses are individually accessible to make the rational decision that diplomatic answers are safer.
What it produces: Rating compression: scores cluster between 3.5 and 4.5 regardless of actual supervision quality differences. Open-text responses shift to generic praise and carefully worded constructive feedback without naming specific behaviour patterns. The data confirms everyone is performing adequately. No partner development conversations are anchored to the data. The program runs for 1–2 cycles and is quietly discontinued.
Structural fix: External data custody: all raw response data must go to an independent third party and never enter the firm’s own systems. Not ‘the platform promises confidentiality’ — but structurally impossible for firm administrators to access individual responses. This is SRA’s architecture: 30+ years, raw data never in client firm systems. Participation rates at US law firms with this architecture exceed 85%. Participation rates at US law firms using firm-administered platforms typically run 30–60%.
Structural fix → SRA’s upward review program holds all raw data externally. Associates at US law firms know their responses will never be accessible to firm administrators. This is the architectural requirement, not a feature.
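Rating compression of this kind is detectable in the raw distribution itself. Below is a minimal diagnostic sketch; the function name and sample scores are invented for illustration and are not drawn from SRA’s tooling:

```python
from statistics import mean, pstdev

def compression_report(scores: list[float]) -> dict:
    """Summarise how tightly upward review scores cluster.

    A high share of scores inside the 3.5-4.5 'diplomatic band' combined
    with a low standard deviation is the signature of rating compression.
    """
    in_band = [s for s in scores if 3.5 <= s <= 4.5]
    return {
        "mean": round(mean(scores), 2),
        "std_dev": round(pstdev(scores), 2),
        "share_in_band": round(len(in_band) / len(scores), 2),
    }

# Example: a compressed distribution typical of firm-administered systems.
print(compression_report([3.8, 4.1, 4.0, 3.9, 4.2, 3.7, 4.3, 4.0]))
# {'mean': 4.0, 'std_dev': 0.19, 'share_in_band': 1.0}
```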
Mistake 2: No Minimum Response Threshold Before Individual Scores Are Reported
Symptom: Associates decline to participate or give generic responses when practice groups are small
Why it happens at US law firms: In a practice group with three associates rating the same partner, a response threshold of three means that if all three respond, each associate can do the arithmetic: ‘If I gave a 2 and the report shows an average of 2.7, my response is identifiable within a narrow range.’ In small groups, mathematical inference makes absolute anonymity impossible regardless of what the system promises. Associates in small practice groups at US law firms are the most likely to be supervised by the partners they find most problematic, and the least likely to respond honestly when they can infer individual attribution.
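To make that inference concrete, here is a small illustrative sketch (it assumes integer ratings on a 1–5 scale, which is an assumption for this example only) enumerating every rating combination consistent with a reported three-person average:

```python
from itertools import product

def consistent_ratings(reported_mean: float, n_raters: int, scale=range(1, 6)):
    """Enumerate every integer rating combination consistent with a
    reported group mean (to within rounding) -- the inference a partner
    or associate can perform in a small practice group."""
    return {
        tuple(sorted(combo))
        for combo in product(scale, repeat=n_raters)
        if abs(sum(combo) / n_raters - reported_mean) < 0.05
    }

# A reported mean of 2.7 from three raters implies a rating sum of 8,
# leaving only four possible combinations -- a low individual score is
# attributable within a narrow range.
print(sorted(consistent_ratings(2.7, 3)))
# [(1, 2, 5), (1, 3, 4), (2, 2, 4), (2, 3, 3)]
```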
What it produces: Two failure modes: either participation drops in small practice groups (the associates who most need to be heard don’t respond), or participation is maintained but responses are uniformly positive (associates respond but moderate into the diplomatically safe range to avoid mathematical attribution).
Structural fix: Set a minimum response threshold — SRA uses a minimum of four responses before individual partner scores are reported. When fewer than four associates rate a specific partner, that partner’s individual scores are withheld and folded into aggregate practice group data. This protects anonymity in small groups while maintaining data collection. Associates who know the threshold exists — and understand it structurally prevents individual attribution — respond more honestly.
Structural fix → SRA’s minimum four-response threshold is communicated to associates before the survey opens. The threshold is the mechanism that makes the anonymity promise credible in small-group contexts at US law firms.
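A minimal sketch of how threshold-gated reporting can work, assuming responses arrive as simple partner/group/score records; the record shape and function names are illustrative, and only the four-response minimum itself comes from SRA’s stated practice:

```python
from collections import defaultdict
from statistics import mean

MIN_RESPONSES = 4  # individual partner scores reported only at or above this

def build_report(responses: list[dict]) -> dict:
    """Apply a minimum-response threshold before reporting partner scores.

    Each response is a dict like {"partner": ..., "group": ..., "score": ...}.
    Partners with fewer than MIN_RESPONSES raters are withheld; their
    responses are folded into the practice group aggregate instead.
    """
    by_partner = defaultdict(list)
    group_of = {}
    for r in responses:
        by_partner[r["partner"]].append(r["score"])
        group_of[r["partner"]] = r["group"]

    partner_scores = {}
    group_pool = defaultdict(list)
    for partner, scores in by_partner.items():
        if len(scores) >= MIN_RESPONSES:
            partner_scores[partner] = round(mean(scores), 2)
        else:
            # Withheld: folded into aggregate practice group data.
            group_pool[group_of[partner]].extend(scores)

    group_aggregates = {g: round(mean(s), 2) for g, s in group_pool.items()}
    return {"partners": partner_scores, "group_aggregates": group_aggregates}

# A partner rated by three associates is withheld; one rated by four is reported.
print(build_report([
    {"partner": "A", "group": "Litigation", "score": 2.0},
    {"partner": "A", "group": "Litigation", "score": 3.0},
    {"partner": "A", "group": "Litigation", "score": 3.0},
    {"partner": "B", "group": "Litigation", "score": 4.0},
    {"partner": "B", "group": "Litigation", "score": 4.5},
    {"partner": "B", "group": "Litigation", "score": 3.5},
    {"partner": "B", "group": "Litigation", "score": 4.0},
]))
# {'partners': {'B': 4.0}, 'group_aggregates': {'Litigation': 2.67}}
```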
SRA designs and administers upward review programs exclusively for United States law firms.
Including the anonymity architecture, question framework, and partner-level reporting with firm-average benchmarks that avoid every mistake in this list. 30+ years serving US law firms.
Upward Review Program → srahq.com/services#upward | Contact SRA → srahq.com/contact
Mistake 3: Generic Question Frameworks Not Designed for Law Firm Dynamics
Symptom: Partners score uniformly high on dimensions that don't reflect their actual impact on associates
Why it happens at US law firms: Most upward feedback platforms offer generic 360-degree feedback question templates designed for corporate manager–report relationships. These templates ask about ‘leadership vision,’ ‘cross-functional collaboration,’ and ‘goal alignment’ — dimensions that have no direct translation to the partner–associate dynamic at a US law firm. US law firm associates working with a partner on a complex litigation matter are not primarily evaluating ‘leadership vision.’ They are evaluating whether the partner gives specific developmental feedback, allocates work fairly across the team, provides sufficient context before assigning tasks, and is accessible when guidance is needed.
What it produces: Question framework mismatch produces two outcomes. First, partners score uniformly high on dimensions that don’t reflect their actual supervision impact, while the dimensions that most predict associate retention go unmeasured. Second, associates lose confidence in the program’s ability to capture their actual experience, which reduces response quality in subsequent cycles.
Structural fix: Purpose-built question frameworks for US law firm upward feedback. Core dimensions should include: feedback quality and specificity, work allocation fairness, accessibility and responsiveness, matter briefing and context provision, development support and advocacy, psychological safety, and consistency across different associate cohorts. SRA’s question framework for US law firms has been refined across 30+ years of working exclusively with law firms; it was not adapted from a corporate HR template.
Structural fix → SRA’s upward review program uses question frameworks designed exclusively for US law firm partner–associate dynamics. The dimensions that predict associate retention at American law firms are the dimensions we measure.
Mistake 4: Upward Feedback Results Have No Institutional Consequences
Symptom: Participation rates decline in subsequent cycles; associates describe the program as 'box-ticking'
Why it happens at US law firms: US law firm associates are sophisticated observers of their firm’s institutional culture. When upward feedback is collected, reports are generated, and nothing changes — no partner development conversations, no visible response from leadership, no year-over-year score tracking — associates update their belief about whether the program reflects genuine institutional commitment or performative HR administration. The second cycle typically sees declining participation. The third cycle, if it occurs, produces the most diplomatically positive data of all: associates who remain have concluded that honesty has no upside.
What it produces: A vicious cycle: useless data leads to no action, which leads to lower participation, which leads to worse data, which leads to less action. The program produces exactly the outcome that opponents of upward feedback programs predicted — not because those opponents were right but because the program was designed in a way that made their prediction come true.
Structural fix: Upward feedback data must carry institutional weight. Minimum requirements: individual partner reports with firm-average benchmarks reviewed in formal partner development conversations, upward review scores included as one input in compensation discussions, year-over-year trend data tracked with explicit improvement targets for partners who scored below the firm average in the prior cycle. Associates need to see evidence that the data changes something before they trust the program enough to give honest responses.
Structural fix → SRA’s firm engagement survey includes questions measuring whether associates believe upward feedback influences leadership decisions. Low scores on this dimension identify whether an institutional consequence gap is undermining the upward feedback program at your US law firm.
Mistake 5: No Year-Over-Year Trend Tracking
Symptom: Each cycle produces a snapshot that cannot be acted on; partners with persistent problems are invisible
Why it happens at US law firms: Most US law firm upward feedback programs treat each cycle as a standalone data collection event. A partner who scores 2.9 on feedback quality in 2024 and 3.1 in 2025 has ‘improved’ on paper. A partner who scores 2.9, 2.8, and 3.0 across three cycles has a consistent, persistent problem with feedback quality that is invisible in single-year reporting. Without trend data, partners with persistent supervision quality problems escape the development conversation that would address them — each year presenting a modestly adjusted single-year score that appears to be within normal variance.
What it produces: Persistent low scorers — partners whose scores have not improved across multiple cycles despite their inclusion in the program — are invisible to firm leadership. Concurrently, persistent improvement — partners whose scores have risen significantly following development interventions — is also invisible, removing the positive reinforcement that sustains partner engagement with the program.
Structural fix: Year-over-year trend reporting as a standard output, not an optional add-on. Every partner report should show scores across all available cycles with direction indicators. The development conversation that most changes partner behaviour is: ‘Your feedback quality score has been below the firm average for three consecutive cycles and shows no improvement trend. Here is what we need to do differently.’ That conversation requires longitudinal data. SRA retains aggregated data for all US law firm clients in perpetuity — the longitudinal dataset builds value with every cycle.
Structural fix → SRA’s upward review program produces year-over-year trend reports as standard. Partners who improve receive recognition. Partners with persistent gaps receive the specific, longitudinal evidence that makes development conversations actionable.
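A minimal sketch of the longitudinal view, assuming per-cycle partner scores and firm averages are already computed; the output format and the three-consecutive-cycle flag rule are illustrative choices, not SRA’s report format:

```python
def trend_lines(scores_by_cycle: dict[str, float],
                firm_avg_by_cycle: dict[str, float]) -> str:
    """Render one partner's longitudinal scores with direction indicators
    and flag persistent below-average performance across cycles."""
    cycles = sorted(scores_by_cycle)
    rows, below, prev = [], 0, None
    for c in cycles:
        s, avg = scores_by_cycle[c], firm_avg_by_cycle[c]
        arrow = "→" if prev is None else ("↑" if s > prev else "↓" if s < prev else "→")
        below += s < avg  # count cycles spent under the firm average
        rows.append(f"{c}: {s:.1f} {arrow} (firm avg {avg:.1f})")
        prev = s
    if below == len(cycles) and len(cycles) >= 3:
        rows.append(f"FLAG: below firm average for {len(cycles)} consecutive "
                    "cycles -- development conversation required")
    return "\n".join(rows)

# The 2.9 / 2.8 / 3.0 partner from the example above: each single-year
# score looks like normal variance, but the trend view surfaces the flag.
print(trend_lines({"2023": 2.9, "2024": 2.8, "2025": 3.0},
                  {"2023": 3.6, "2024": 3.6, "2025": 3.7}))
```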
Mistake 6: Open-Text Responses Delivered Raw Without Thematic Aggregation
Symptom: Individual comments are identifiable; small group size makes attribution possible; partners react defensively
Why it happens at US law firms: Some US law firm upward feedback programs deliver open-text associate responses directly to the partner being evaluated, redacted only for obvious identifying information. In practice, writing style, specific matter references, and unusual phrasings make individual attribution possible even after basic redaction — especially in small practice groups. When associates perceive that their written responses might be individually identifiable, they shift toward diplomatic language that loses the specific, actionable quality that makes open-text responses valuable.
What it produces: The open-text data — the most valuable part of the upward feedback report, the part that provides the specific context that quantitative scores cannot — becomes the least useful part of the report. Either responses are so heavily redacted that specific context is lost, or they are delivered with enough specificity that associates (correctly) conclude that participation in the open-text section carries individual attribution risk.
Structural fix: Thematic aggregation before delivery. SRA’s organisational psychology team reads all open-text responses for a given partner and synthesises them into a thematic summary: ‘Associates in your practice group consistently describe a feedback timing gap: work is evaluated at year-end rather than at matter completion, which prevents the specific, timely feedback that would enable course correction during the matter itself.’ This synthesis preserves the specific, actionable content of individual responses while making individual attribution structurally impossible. The partner receives the insight without the identity.
Structural fix → SRA’s thematic aggregation process is performed by SRA’s team before any open-text data reaches the firm. Associates at US law firms know their written responses will be synthesised, not delivered verbatim — which is why open-text participation rates in SRA programs exceed participation rates on the quantitative scales.
The 6 Mistakes at a Glance: Problem, Symptom, Structural Fix
- Mistake 1: Data stored in firm-administered systems → compressed, diplomatic ratings → external data custody with an independent third party.
- Mistake 2: No minimum response threshold → non-response or moderated scores in small practice groups → minimum of four responses before individual partner scores are reported.
- Mistake 3: Generic question frameworks → uniformly high scores on dimensions that don’t predict retention → question framework purpose-built for partner–associate dynamics.
- Mistake 4: No institutional consequences → declining participation and ‘box-ticking’ perception → benchmarked partner reports tied to development conversations and compensation input.
- Mistake 5: No year-over-year trend tracking → persistent problems and genuine improvement both invisible → longitudinal trend reports as a standard output.
- Mistake 6: Raw open-text delivery → identifiable comments and defensive partners → thematic aggregation before any open text reaches the firm.
💡 Key Insight: All six mistakes are architectural. None of them require associates to be braver or partners to be more self-aware. They require the program to be designed in ways that make honest responses structurally safe, institutionally consequential, and analytically useful. SRA’s upward review program for US law firms addresses all six by design.
Frequently Asked Questions: Upward Feedback Programs at US Law Firms
1. Why do most US law firm upward feedback programs produce data that’s too diplomatic to act on?
Most US law firm upward feedback programs produce diplomatically vague data for one primary reason: data custody architecture. When upward feedback data is stored in a platform that the firm contracts and administers, associates at US law firms make a rational calculation: diplomatic answers are safer than candid ones. This calculation does not require associates to believe their responses are actively monitored — it requires only that they cannot rule out the possibility. The power asymmetry between a supervising partner — who controls work allocation, compensation, and the partnership track — and a junior associate making this calculation is sufficient to compress upward review scores to the diplomatically safe range of 3.5–4.5 regardless of actual supervision quality differences. The solution is not better questions or better communication about the program’s confidentiality. It is structural independence: external data custody with a third party that holds raw responses in a system the firm cannot access.
2. What is the most common mistake US law firms make when designing upward reviews?
The most common design mistake is administering the program through a firm-contracted platform while describing it as ‘confidential.’ Associates at US law firms correctly interpret the distinction between ‘confidential within our platform’ and ‘impossible for the firm to access individually.’ These are not the same thing. A platform that the firm administers is a firm-administered system regardless of its privacy settings — and associates at US law firms know this. The second most common mistake is failing to establish institutional consequences for upward review data: when associates see that low partner scores produce no visible development response, they update their belief about whether the program is genuine and participation quality declines in subsequent cycles. The third most common mistake is using generic corporate 360-degree feedback question templates that were not designed for the partner–associate dynamics of US law firms, producing scores on dimensions that do not predict associate retention or supervision quality.
3. How does anonymity architecture determine whether upward feedback at a US law firm is honest?
Anonymity architecture at US law firms determines upward feedback honesty through a specific mechanism: the associate’s assessment of whether their individual response is theoretically traceable. A survey administered through a firm-contracted platform — even one with strong privacy settings — is theoretically accessible to firm administrators. Associates at US law firms who are evaluating whether to give a 2.1 to a supervising partner who controls their work allocation are making a risk assessment, not a policy review. If individual attribution is theoretically possible, they moderate toward safety. External data custody removes theoretical traceability: SRA holds all raw responses in systems the client firm cannot access. When associates know this — and when the minimum response threshold is communicated — the rational calculation shifts: the personal risk of honest response is zero, and the response reflects actual assessment rather than moderated safety.
4. How many associates need to respond before upward review data is reliable at a US law firm?
SRA uses a minimum of four responses from associates working directly with a specific partner before reporting individual partner scores. Below four, individual attribution is mathematically inferable in small practice groups regardless of aggregation — and associates in small groups correctly perceive this. At four or more, the aggregation produces scores that cannot be attributed to any individual associate even by inference. The four-response minimum should be communicated to associates before the survey opens: ‘Your individual responses will only appear in reports if at least four associates working with the same partner respond.’ This communication has two effects: it confirms that the anonymity protection is structural, not just a policy promise, and it assures associates in small groups that their feedback will appear only in aggregate practice group data, never in an individually attributable partner report.
5. How should US law firms report upward feedback results to partners without identifying individual respondents?
US law firms should report upward feedback results to partners through two layers of analysis: quantitative scores with firm-average benchmarks, and thematically aggregated open-text summaries. The quantitative report shows each partner’s scores on defined dimensions compared to the firm average across all partners, with year-over-year trend indicators where multiple cycles are available. This contextualisation prevents individual score dismissal (‘a 2.9 is fine’) by showing whether the score is above or below the firm average and whether it is improving or declining. The open-text summary is produced by SRA’s team through thematic synthesis — reading all open-text responses for a given partner and generating a written summary of patterns without quoting individual responses. This synthesis preserves the specific, actionable content of individual feedback while making individual attribution structurally impossible. Partners receive the insight without the attribution.
SRA Services That Address Upward Feedback Design at US Law Firms
- Upward review program: external data custody + 4-response threshold + purpose-built questions + thematic aggregation + benchmarked partner reports + year-over-year trends. Addresses all 6 mistakes.
- Firm engagement survey: includes a question measuring whether associates believe upward feedback influences leadership decisions, identifying an institutional consequence gap. Addresses Mistake 4.
- 360-degree feedback: multi-rater design for partners in leadership development cycles; rater group gap analysis adds context to upward review scores. Addresses Mistakes 3 and 5.
- Exit interview analysis: confirms which upward feedback dimensions drove each departure and validates which dimensions to weight most heavily in future cycles. Addresses Mistake 4 (consequence).
- Quarterly pulse tracking: drops in eNPS by practice group identify which partner’s team is accumulating risk between annual upward review cycles. Addresses Mistake 5 (interim signal).
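On the pulse metric above: eNPS is conventionally calculated as the percentage of promoters (scores of 9–10 on a 0–10 likelihood-to-recommend scale) minus the percentage of detractors (0–6). A minimal sketch with invented quarterly data:

```python
def enps(scores: list[int]) -> int:
    """Employee Net Promoter Score: % promoters (9-10) minus % detractors
    (0-6) on the standard 0-10 likelihood-to-recommend scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# A quarter-over-quarter drop in one practice group is the interim
# risk signal between annual upward review cycles.
q1 = [9, 8, 10, 7, 9, 6]   # 3 promoters, 1 detractor  -> eNPS  33
q2 = [7, 6, 8, 5, 9, 6]    # 1 promoter,  3 detractors -> eNPS -33
print(enps(q1), enps(q2))
```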
Sources
- BigHand, “Law Firm Leaders Survey,” 800+ US law firm respondents, 2025
- Thomson Reuters, “Legal Talent and Career Development Report,” 2024
- Major, Lindsey & Africa (MLA), Associate Survey on Retention, 2024
- NALP Foundation, “Associate Attrition and Law Firm Retention,” 2024
Related Reading
- 10 Upward Review Questions Every US Law Firm Should Ask Partners in 2026
- 360-Degree Feedback for US Law Firms: A Complete 2026 Guide
- Why US Law Firm Leaders Need Upward Reviews in 2026 — The Data Case
- 7 Law Firm Leadership Red Flags That Drive Associate Attrition at US Firms — 2026
Is your US law firm’s upward feedback program producing data you can act on — or data that confirms everyone is fine?
SRA designs and administers upward review programs exclusively for United States law firms. All six design mistakes in this guide are addressed by SRA’s architecture: external data custody, minimum response thresholds, purpose-built question frameworks, institutional consequence integration, year-over-year trend reporting, and thematic open-text aggregation.
Upward Reviews → srahq.com/services#upward | 360-Degree Feedback → srahq.com/services#360
Firm Engagement Survey → srahq.com/services#firm | Contact SRA → srahq.com/contact
Exclusively serving United States law firms since 1987.