April 15, 2026

10 Upward Review Questions Every US Law Firm Should Ask Partners in 2026

Shivani Shah

At United States law firms, upward reviews only work if associates believe their responses are genuinely anonymous. That belief is not about trust in partners; it is about trust in the architecture. If an associate knows that their rating is stored in the firm’s own HR system, visible to IT administrators, and potentially accessible to the managing partner, they will rate every partner at 4 out of 5 and write nothing in the open-text fields. The firm receives data that confirms everyone is performing adequately. No one learns anything. Nothing changes.

The question design matters too. Vague questions produce vague answers. Questions anchored to specific, observable partner behaviours (feedback timing, work allocation, accessibility, supervision consistency) produce answers that partners can actually act on. The combination of structural anonymity and behaviourally-anchored questions is what makes an upward review program at a US law firm worth running.

The 10 questions below are drawn from SRA’s 30+ years of designing and administering upward review programs exclusively for United States law firms. Each question is anchored to a specific partner behaviour, explained with the underlying data, and paired with the weak version to avoid.

What is an upward review at a US law firm?

An upward review at a US law firm is a structured, anonymous feedback process in which associates and counsel evaluate their supervising partners across defined dimensions, typically including feedback quality, work allocation fairness, supervision consistency, accessibility, and professional development support. The critical architectural requirement is that responses are held by an independent third party outside the firm’s own systems, which is what makes associates respond honestly. Upward reviews are distinct from 360-degree feedback: a 360-degree review collects feedback from all directions (supervisors, peers, and direct reports), while an upward review focuses specifically on the associate-to-partner direction.

Why Anonymity Architecture Determines Whether Your Upward Review Data Is Honest

Before the questions: the single most important design decision in a US law firm upward review program is not what you ask; it is where the data lives. At American law firms using self-service HR software, associate responses are stored in the firm’s own system. Associates who understand how software works know that data is technically accessible to system administrators. That knowledge suppresses candour, particularly in upward reviews where an associate is rating the partner who controls their compensation, work allocation, and partnership track.

SRA’s upward review program holds all raw response data externally; it has never entered a client firm’s internal systems in 30+ years of operation. Partners receive individual reports with benchmark comparisons. The firm receives aggregate data. No raw responses, no individual identifiers, no response pattern that could be traced back to a specific associate ever reaches the firm. This is why SRA’s response rates at US law firms typically exceed 85%, including at small firms where everyone knows each other.

Key Insight: The response rate on an upward review is the quality signal for the data. A 60% response rate means 40% of associates chose not to participate: almost certainly the ones with the most substantive feedback. An 85%+ response rate means you have a representative sample. The difference between those two rates is almost always the anonymity architecture, not the questions.

Why US Law Firms Need Upward Reviews More in 2026 Than in 2025

The 2025–2026 US legal market data creates a specific case for upward reviews as a structural retention instrument, not just a leadership development tool:

  • 60% of associates feel their US firm is NOT actively trying to retain them (MLA, 2024). Upward reviews are the primary structural signal that leadership is listening.
  • 61% of associates receive useful feedback only a few times per year (Thomson Reuters, 2024). Upward reviews surface which specific partners are the feedback gap.
  • 37% of matters are resourced by partner preference rather than merit (BigHand, 2025). Work allocation fairness is a core upward review dimension; partners do not see this data otherwise.
  • 27% firm-wide lawyer attrition at US law firms (BigHand, 2025). Partners with consistently low upward review scores are the attrition source.
  • 82% of associates left within 5 years, an all-time high (NALP, 2024). Upward review data identifies attrition-risk partners 6–12 months before the departure.

Data point: The nonequity-tier wave reshaping US BigLaw (Sullivan & Cromwell, Freshfields, Sidley, Paul Weiss, WilmerHale, Cleary, Debevoise, and Arnold & Porter, all in 2025–2026) is changing the supervision dynamic. Partners who previously managed associates toward the equity track are now managing them toward a nonequity ceiling. That shift produces a supervision quality gap that only upward reviews will detect.

The 10 Upward Review Questions Every US Law Firm Should Ask Partners

These questions are written for US law firm associates rating their supervising partners. They are formatted for a Likert scale (1–5, Strongly Disagree to Strongly Agree) with an open-text follow-up option on each. The order matters: start with the less threatening dimensions (clarity, feedback) before moving to the more sensitive ones (fairness, respect).

SRA designs and administers upward review programs for United States law firms, including the question framework, anonymity architecture, partner-level reporting, and aggregate firm analysis. Done-for-you: no software for your team to manage. Serving US law firms exclusively for 30+ years.

View the service → srahq.com/services#upward   |   Contact SRA → srahq.com/contact

Question 1

“How clearly does this partner communicate what they expect from you on a matter before work begins?”

Why this question: Unclear expectations at the start of a matter are the single most consistent source of associate frustration in SRA’s exit survey data for US law firms. Associates who do not know what ‘done’ looks like produce work that gets redone, receive critical feedback they experience as unfair, and disengage from the supervising partner. This question measures the expectation-setting behaviour at the point where it matters most: before the work starts.

What it reveals: Partners who score consistently low on this question are generating hidden rework cost and associate frustration across every matter they supervise. A partner who scores 3.2 out of 5 on expectation clarity across 8 associate responses has a specific, actionable development area that would never surface in a downward review.

Weak version to avoid: “Does this partner set clear expectations?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 2

“How often does this partner provide feedback that is specific enough for you to act on immediately?”

Why this question: The 2024 Thomson Reuters data shows 61% of associates at US law firms receive useful feedback only a few times per year. ‘Useful’ is the operative word — feedback that says ‘good work’ or ‘needs improvement’ does not qualify. This question measures the specificity and timeliness of feedback, which are the two dimensions that determine whether feedback actually changes associate behaviour. It also identifies which partners are the feedback gap at the firm level.

What it reveals: Low scores on this question from multiple associates reporting to the same partner identify a feedback quality problem that the partner may be entirely unaware of. Partners who score high here are the firm’s development assets: their approach to feedback can be documented and used to coach lower-performing supervisors.

Weak version to avoid: “Does this partner give you useful feedback?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 3

“When you need guidance on a matter, how accessible is this partner?”

Why this question: Accessibility is not about physical presence; it is about whether an associate can get a decision or guidance when they need it without feeling they are imposing. At US law firms, inaccessible partners create two compounding problems: junior associates make avoidable errors because they could not get timely input, and they disengage because the cost of asking for help feels too high. This question separates genuine accessibility from the performance of an open-door policy.

What it reveals: Partners who score consistently low on accessibility often do not know they are inaccessible; they believe associates should simply send another email or try again tomorrow. The upward review data quantifies the gap between that belief and the associate experience.

Weak version to avoid: “Is this partner available when you need them?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 4

“How fairly does this partner distribute work across associates, relative to their development stage?”

Why this question: BigHand’s 2025 survey of 800+ US law firm leaders found that 37% of matters are resourced by partner preference rather than merit. Associates see this distribution daily. When one associate consistently receives the high-profile client work and another receives document review, the second associate is receiving a clear signal about their standing that no performance review will correct. This question makes work allocation a named, measurable dimension of partner performance.

What it reveals: This is frequently the highest-variance question in SRA’s US law firm upward review data: some partners score very high, others very low, and the spread within a single firm is often larger than leadership expects. Work allocation fairness scores correlate strongly with associate attrition risk in SRA’s longitudinal data.

Weak version to avoid: “Does this partner assign work fairly?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 5

“How consistently does this partner follow through on commitments they make to you about feedback timing, career conversations, or work assignments?”

Why this question: Reliability is a distinct dimension from accessibility or feedback quality, and it often explains disengagement that neither of those dimensions captures. An associate who was promised a client introduction, a career conversation, or a specific review date, and did not receive it, updates their model of the partner’s trustworthiness. At US law firms where associates are evaluating whether to stay, every unmet commitment is a data point in the wrong direction.

What it reveals: Partners who score low on follow-through typically have a pattern of intention without execution: they mean to have the conversation, provide the feedback, make the introduction. The upward review quantifies the gap between intention and experience across multiple associates.

Weak version to avoid: “Does this partner follow through on what they say they will do?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 6

“How actively does this partner support your professional development, including recommending you for stretch assignments, client introductions, or training opportunities?”

Why this question: Partnership track advancement at US law firms is not driven solely by technical performance; it is driven by sponsorship. Partners who actively advocate for their associates, recommend them for high-visibility matters, and create introduction opportunities accelerate associate development in ways that no formal training program replicates. This question identifies which partners are genuine sponsors and which are supervisors in name only.

What it reveals: At firms with strong Gen Z and Millennial associate retention, the partners who score highest on this question are the retention anchors: associates stay because of the partner they work for, not because of the firm’s brand. Identifying those partners through upward review data lets the firm document and replicate what they do.

Weak version to avoid: “Does this partner help your career?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 7

“How respectful is this partner in their day-to-day interactions with you: in tone, in responsiveness, and in how they handle disagreement?”

Why this question: Respect in a supervision relationship is not just a culture concern; it is a productivity and legal risk variable. Associates who experience disrespectful supervision produce work defensively, withhold ideas, and avoid raising concerns that the partner needs to hear. At US law firms operating under heightened DEI scrutiny, partner behaviour in one-to-one supervision interactions is also an institutional exposure that upward review data can surface before it becomes an HR matter.

What it reveals: This question often surfaces the steepest divide between how a partner perceives their own interpersonal style and how their associates experience it. Partners who believe they are direct and efficient frequently discover their associates experience them as dismissive or demeaning. That gap is not visible in any other data source.

Weak version to avoid: “Is this partner respectful?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 8

“How well does this partner prepare you for client interactions, including context briefings, role clarity, and post-interaction debriefs?”

Why this question: Client-facing preparation is one of the highest-value developmental activities a partner can provide and one of the most inconsistently delivered. Associates who are sent into client calls without adequate context, role briefing, or post-call feedback are being set up to underperform. This question measures a specific, observable behaviour that directly determines how quickly associates develop client-handling competence, a key criterion for every US law firm’s partnership track.

What it reveals: Low scores on client preparation correlate strongly with associate-reported stagnation on the partnership track. Associates who feel they are not developing client skills at the expected rate are significantly more likely to leave for a firm where they believe they will.

Question 9

“How open is this partner to hearing a different view on a matter, including pushback on their approach or strategy?”

Why this question: At US law firms, the quality of legal work is directly related to whether associates feel safe flagging concerns, offering alternative interpretations, or questioning a partner’s strategy. Partners who close down dissent, even subtly through tone or dismissiveness, create teams that confirm rather than challenge. This question measures psychological safety at the supervision level, which is the dimension most strongly correlated with matter quality in law firm research.

What it reveals: Partners who score low on openness to pushback are risk concentrations in the legal work: their matters are less likely to have received meaningful associate challenge. That is both a quality risk and a development failure.

Weak version to avoid: “Is this partner open to feedback?”
Vague questions produce vague answers. The version above prompts a gut-feel rating with no behavioural anchor. Associates cannot give feedback they cannot observe and describe specifically.

Question 10

“What is one specific thing this partner could change that would most improve your experience of working with them?”

Why this question: Open-text questions yield data that no Likert scale can capture. The first nine questions identify how a partner scores on defined dimensions. This question surfaces what associates most want to change that the question framework may not have named: a specific behaviour, a pattern, a habit that recurs across multiple associates’ responses. Because SRA anonymises and aggregates open-text responses before reporting to the firm, partners receive thematic summaries rather than individually identifiable comments, which protects response candour.

What it reveals: In SRA’s experience administering upward reviews for United States law firms, the open-text responses to this question are frequently the most actionable data in the entire report. Recurring themes across five or more associates describing the same behaviour give the partner and the firm’s leadership a specific, prioritised development target.

The 10 Questions at a Glance: Dimension, Scale, and Follow-Up

All ten questions use a five-point Likert scale (1–5) with an open-text follow-up, except Question 10, which is open text only.

  • 1. Clarity of expectations before work begins: supervision quality (expectation-setting)
  • 2. Specificity and timeliness of feedback: feedback quality (development value)
  • 3. Accessibility when guidance is needed: leadership accessibility (responsiveness)
  • 4. Fairness of work allocation by development stage: work allocation equity (merit vs preference)
  • 5. Consistency in following through on commitments: reliability (trust-building behaviour)
  • 6. Active support for professional development and sponsorship: sponsorship (career acceleration)
  • 7. Respectfulness in day-to-day interactions: interpersonal conduct (psychological safety)
  • 8. Quality of client interaction preparation: development quality (client-skill building)
  • 9. Openness to pushback and alternative views: psychological safety (matter quality)
  • 10. One specific improvement (open text): qualitative insight (unstructured signal)

What to Do With the Data: Reporting Framework for US Law Firms

The 10 questions above generate two types of output that serve different firm audiences:

  • Individual partner report (scores by dimension vs firm average): specific benchmarked development targets; partner-level coaching conversations.
  • Aggregate firm report (all partner scores by dimension): firm-wide supervision gap identification; partner training prioritisation.
  • Cohort segmentation (by practice group or class year): surfaces whether supervision quality varies across groups or associate seniority levels.
  • Year-over-year trend report (score movement per partner): measures whether partner development conversations are producing behaviour change.
  • Open-text thematic summary (anonymised and aggregated): specific behavioural feedback that quantitative scores alone cannot surface.


Key Insight: The most valuable report in a US law firm upward review program is the year-over-year trend report. A single cycle tells you where each partner stands. Three cycles tell you which partners are developing and which are not. That longitudinal data is the basis for promotion decisions, coaching investments, and — in cases of persistent low scores combined with high associate attrition from a partner’s team — more significant leadership conversations.

Upward Reviews vs 360-Degree Feedback: Which Does Your US Law Firm Need?

US law firms frequently ask whether to implement upward reviews or 360-degree feedback programs. The answer depends on what the firm is trying to measure and at which tier.

  • Raters: an upward review collects ratings of supervising partners from associates and counsel only; 360-degree feedback is collected from all directions (supervisors above, peers, and direct reports below).
  • Best for: upward reviews suit partner accountability and supervision quality measurement; 360-degree feedback suits senior associate partnership readiness and firm-wide leadership development.
  • Typical US law firm use case: upward reviews run as an annual partner accountability cycle; 360-degree feedback targets associates approaching the partnership decision point.
  • Anonymity: both require externally held data; 360-degree feedback adds complexity from the multi-rater design.
  • Report outputs: upward reviews produce individual partner scores plus a firm aggregate; 360-degree feedback produces individual target scores across all rater groups plus development priorities.
  • SRA recommendation: start with upward reviews for most US law firms (highest ROI, fastest to implement); add 360-degree feedback after the upward review program has run for 1–2 cycles.

Frequently Asked Questions: Upward Reviews at US Law Firms

1. How many associates need to respond before a partner’s upward review score is valid?

SRA uses a minimum of four responses per partner before reporting individual scores. Below that threshold, individual scores are suppressed and the partner’s data is included only in aggregate firm reporting. This minimum is a data quality requirement — three responses or fewer cannot reliably distinguish a real pattern from noise — but it is also an anonymity protection. At US law firms with very small practice groups, SRA designs the review structure to ensure partner scores are only reported when the response threshold is met, which maintains associate confidence in the anonymity architecture.
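The suppression rule described above can be illustrated with a short sketch. This is not SRA's implementation; the data shapes and function are assumptions for illustration, with only the four-response minimum taken from the answer above.

```python
from statistics import mean

MIN_RESPONSES = 4  # the reporting threshold described above

def partner_report(responses_by_partner):
    """Return per-partner mean scores, suppressing partners below the threshold.

    responses_by_partner: dict mapping partner name -> list of 1-5 Likert ratings.
    Suppressed partners still contribute to the firm-wide aggregate.
    """
    reported = {}
    all_scores = []
    for partner, scores in responses_by_partner.items():
        all_scores.extend(scores)  # everyone counts toward the firm aggregate
        if len(scores) >= MIN_RESPONSES:
            reported[partner] = round(mean(scores), 1)
        # below threshold: no individual score is released
    firm_average = round(mean(all_scores), 2) if all_scores else None
    return reported, firm_average

reported, firm_avg = partner_report({
    "Partner A": [4, 5, 3, 4, 4],  # 5 responses: individually reported
    "Partner B": [2, 3],           # 2 responses: suppressed
})
# Partner B appears only in the firm average, never individually.
```

The point of the sketch is the asymmetry: suppression removes the individual score but not the data, so small-group responses still inform the aggregate report.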

2. Should the 10 upward review questions be rated on a scale or answered yes/no?

A five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree) is the standard for SRA’s US law firm upward review programs, for three reasons. First, it produces aggregable numerical data that can be compared across partners, practice groups, and review cycles. Second, it provides meaningful differentiation — the gap between a 3.2 and a 4.1 on feedback quality across eight associates is a substantive finding that a yes/no response cannot capture. Third, it allows SRA to benchmark individual partner scores against the US law firm average, which gives each partner context for their results that they would not have from their own firm’s data alone.
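The benchmarking idea in the second and third reasons can be sketched as a per-dimension gap calculation. All numbers and dimension names below are invented for illustration; they are not SRA benchmark data.

```python
# Hypothetical per-dimension Likert means for one partner, compared against
# an assumed US law firm benchmark. All figures are illustrative only.
partner_scores = {"feedback quality": 3.2, "work allocation": 4.1, "accessibility": 3.8}
benchmark = {"feedback quality": 3.9, "work allocation": 3.7, "accessibility": 4.0}

def benchmark_gaps(scores, bench):
    """Signed gap per dimension: positive means above benchmark."""
    return {dim: round(scores[dim] - bench[dim], 2) for dim in scores}

gaps = benchmark_gaps(partner_scores, benchmark)
# A negative gap such as -0.7 on feedback quality flags a development priority
# that a raw score alone would not reveal.
```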

3. How do you prevent a partner from identifying which associate gave which rating?

Three design protections work in combination. First, structural independence: SRA holds all raw data externally. Firm leadership, IT administrators, and supervising partners do not have access to individual responses or response patterns. Second, aggregation threshold: scores are only reported when the minimum response count is met, so a partner cannot identify a rating by eliminating the known respondents. Third, open-text handling: SRA anonymises and thematically aggregates all open-text responses before reporting, removing identifying information, personal names, specific matter references, and any language patterns that could identify a respondent in a small group context.

4. What happens to upward review data from year to year at US law firms?

SRA retains aggregated data for longitudinal analysis, which is the primary value of a multi-year program. Year-over-year trend reports show whether a partner’s scores on specific dimensions are improving, stable, or declining after development conversations. This trend data is the most actionable output in the program for US law firm leadership: a partner who scored 2.8 on work allocation fairness in year one and 4.1 in year three has demonstrable evidence of change. A partner who scored 2.8 in year one, had a development conversation, and scored 2.9 in year three is a different leadership conversation. Individual raw responses are never retained after the reporting cycle is complete.
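The distinction drawn above (real movement versus a flat score) amounts to a simple trend classification. The 0.3 threshold below is an invented illustration, not an SRA figure; only the two example trajectories come from the answer above.

```python
def classify_trend(scores_by_year, threshold=0.3):
    """Classify a partner's score movement across review cycles.

    scores_by_year: dimension means in chronological order, e.g. [2.8, 3.4, 4.1].
    threshold: assumed minimum change to count as real movement (illustrative).
    """
    delta = scores_by_year[-1] - scores_by_year[0]
    if delta >= threshold:
        return "improving"
    if delta <= -threshold:
        return "declining"
    return "stable"

# The two partners described above:
classify_trend([2.8, 4.1])  # "improving": demonstrable evidence of change
classify_trend([2.8, 2.9])  # "stable": a different leadership conversation
```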

5. How does SRA’s upward review program differ from using a self-service survey tool?

Three differences determine the quality of the data. First, anonymity architecture: a self-service survey tool stores data in a system the firm administers — associates know this and moderate their responses accordingly. SRA holds data externally, which is the structural requirement for honest responses in a hierarchical professional service firm. Second, question design: SRA’s question frameworks are developed and refined over 30+ years of exclusive US law firm practice, calibrated to the specific supervision dynamics of the legal profession. Generic HR survey questions do not reflect the partner–associate relationship. Third, reporting: SRA’s reports are designed for law firm leadership — practice group analysis, year-over-year benchmarking, and open-text thematic summaries are standard outputs. A self-service tool produces a data export. The interpretation work remains entirely with the firm.

SRA Upward Review and Feedback Services for US Law Firms

Survey Research Associates has designed and administered upward review and 360-degree feedback programs exclusively for United States law firms since 1987. All services are fully managed.

  • Upward Reviews: Associates rate supervising partners on 10 defined dimensions. Data held externally. Individual partner reports plus firm aggregate. Year-over-year trend reporting.
  • 360-Degree Feedback: Full-circle assessment with supervisor, peer, and direct-report ratings. Designed for senior associates approaching partnership readiness decisions.
  • Firm Engagement Survey: Annual diagnostic segmented by class year. Identifies firm-wide engagement drivers and risk factors before they produce attrition.
  • eNPS: Quarterly loyalty metric with 6–12 month lead time on attrition. Tracks culture health between annual engagement survey cycles.
  • Exit Survey: Captures candid departure reasons externally. Identifies supervisor-linked attrition patterns not visible in internal exit interviews.
  • Self-Assessment Survey: Structured associate self-evaluation to complement upward and downward reviews. Associates who self-assess engage more substantively.

Sources

  • BigHand, “Law Firm Leaders Survey,” 800+ US law firm respondents, 2025
  • NALP Foundation, “Associate Attrition and Law Firm Retention,” 2024
  • Thomson Reuters, “Legal Talent and Career Development Report,” 2024
  • Major, Lindsey & Africa (MLA), Associate Survey on Retention, 2024
  • Partner Track Transparency Report, 2026 — BigLaw equity partner attainment rates
  • Citi/Hildebrandt Law Firm Group, US Law Firm Trends Report, 2026


Ready to run an upward review program your US law firm’s associates will actually respond to honestly?

SRA designs and administers upward review programs exclusively for United States law firms — including the question framework, anonymity architecture, partner-level reporting, and year-over-year trend analysis. Done-for-you. No software. 30+ years serving US law firms.

Upward Review Program → srahq.com/services#upward   |   360-Degree Feedback → srahq.com/services#360

Contact SRA → srahq.com/contact   |   All Services → srahq.com/services

Exclusively serving United States law firms since 1987.
