Most US law firms evaluate their associates rigorously. Multi-source feedback. Quantitative metrics. Self-assessments. Structured documentation. Annual review cycles supported by mid-year check-ins.
Most US law firms evaluate their partners barely at all.
The asymmetry is one of the open secrets of legal industry HR. A fifth-year associate generating $1.2 million in collections gets seven partners weighing in on her annual evaluation. The senior partner sitting across the table from her, generating ten times that, gets a brief origination report and a handshake at the comp committee meeting. The partner who is quietly making three associates miserable, slowing the practice group’s growth, and driving up lateral departures gets no structured feedback at all — until he is approached about leaving, at which point the conversation is awkward and the data is anecdotal.
Legal Evolution’s 2026 Burning Issues report named “Keeping the partners we want to keep” as the number one priority for US law firm leaders heading into 2026. Across firms of every size, this was the most cited concern. The lateral market is hot. Equity partners are mobile. The partners firms most want to retain are also the partners most easily recruited away. And the firms that lose them most often lose them not to bigger compensation packages, but to the absence of any developmental conversation about what they want next.
The structural problem is that most US law firms have never built a real partner performance review program. The instinct exists — partnership committees know partner development is important — but the execution typically defaults to one of three patterns: nothing at all, a perfunctory year-end production review, or a 360-degree feedback exercise so politically constrained that the data nobody dares share is the data leadership most needs.
This guide is for managing partners, executive committees, and firm leadership designing or redesigning their partner performance review programs in 2026. It covers what partner reviews actually need to measure, the anonymity problem that makes most partner 360s useless, how to use partner review data developmentally rather than punitively, and what the firms retaining their best partners are doing differently.
SRA has designed and run confidential partner performance review programs for US law firms since 1987. The architecture below is drawn from that work and from what we see distinguishing the firms that successfully retain their key partners from the firms that lose them.
What partner performance reviews need to measure that associate reviews don’t
The standard associate evaluation framework — supervising attorney comments, billable hours, work product, citizenship, pro bono — does not transfer to partners. Equity partners are owners, not employees. Their contributions to the firm are structurally different from associates’, and the framework that evaluates them needs to be built around the work partners actually do.
The dimensions most firm partner reviews skip are development of others, cultural contribution, strategic contribution, and the connection between performance and compensation. These are also the dimensions where partner-level performance has the largest second-order effects on the firm — on associate retention, on culture, on next-generation rainmaker development, and on the firm’s ability to compete for talent and clients over the next five years.
A partner who originates $4 million annually but drives three mid-level associates to leave every two years is not a net positive for the firm — the lost training investment, lost client continuity, and culture damage often exceed the contribution. Most partner review systems do not have a way to surface that math, which is why it usually gets ignored until the consequences accumulate into a crisis.
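The shape of that math can be sketched with a back-of-envelope calculation. Every figure below is a hypothetical assumption for illustration, not firm data:

```python
# Back-of-envelope net contribution for the partner described above.
# All inputs are illustrative assumptions, not real firm figures.

annual_origination = 4_000_000        # originations credited to the partner
profit_margin = 0.35                  # assumed firm margin on that work
profit_contribution = annual_origination * profit_margin   # ~1,400,000

# Assumed fully loaded cost of one mid-level associate departure:
# recruiting fees, ramp-up of a replacement, lost client continuity.
cost_per_departure = 400_000          # assumption; estimates vary widely
departures_per_year = 3 / 2           # three associates every two years

attrition_drag = cost_per_departure * departures_per_year  # 600,000

net = profit_contribution - attrition_drag
print(f"Profit contribution: ${profit_contribution:,.0f}")
print(f"Attrition drag:      ${attrition_drag:,.0f}")
print(f"Net (before unquantified culture damage): ${net:,.0f}")
```

Even under these generous assumptions, attrition erases over 40% of the partner’s profit contribution before any culture damage is counted — which is the math most review systems never surface.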
The anonymity problem: why most partner 360s produce data nobody trusts
Many US law firms have implemented 360-degree feedback programs for partners. Most of those programs produce data that nobody at the firm fully trusts, for a single structural reason: associates will not give honest feedback on partners in a firm-administered system.
Consider the position of a third-year associate asked to provide feedback on a senior partner she works with regularly. The partner controls her work assignments, her mentorship access, her client exposure, her promotion track, and a meaningful share of her compensation potential. The firm tells her the feedback is anonymous. She knows three things at once:
First, anonymity in a small cohort is not anonymity. If only four associates work with that partner regularly, and one of them is going to provide harsh feedback, the partner can probably identify her from style, content, or specific examples. Anyone who has worked in a small group has had the experience of identifying anonymous comments correctly. Associates assume partners can do the same.
Second, the feedback can move through unofficial channels. Even if the formal system is secure, the partner asking her to dinner in three months might mention that “the 360 round was rough this year” in a way that reveals more than the system was supposed to reveal. The associate has no way to verify that this won’t happen.
Third, and most importantly, the cost of being wrong is asymmetric. If she gives soft, complimentary feedback and is identified, nothing bad happens. If she gives honest critical feedback and is identified, she risks her career trajectory at the firm. A rational associate gives the soft feedback every time.
The result is partner 360 data that runs systematically more positive than reality. Partners with real management problems get 360 scores that suggest they are above average. The data exists, the program runs on schedule, and leadership has no idea which partners are actually creating the cultural damage they suspect is happening somewhere in the firm.
The diagnostic test: if your firm has run partner 360s for three years and the data has never surfaced a partner-specific issue that surprised leadership, the program is almost certainly producing politically curated data. Real partner 360 data, in our experience administering these programs, produces surprises in almost every cycle.
There are two architectural fixes that meaningfully change the dynamic.
Independent third-party administration. Feedback is collected, aggregated, and themed by a third party (like SRA) that has no ongoing relationship with the firm’s partners and no incentive to soften the data. The firm sees thematic findings; the third party retains the raw data; individual responses are never identifiable in what reaches the firm. Associates who have experienced both internal and external administration consistently report higher honesty levels in external programs.
Cohort floors that protect anonymity at the data level. Feedback on a partner is only reported when seven or more associates have submitted responses about that partner. Below the cohort floor, no individual-partner data is produced. This protects associates working with smaller partner groups (where anonymity is structurally impossible) and prevents the firm from receiving data on partners whose feedback set is too small to be defensibly anonymous.
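As a sketch of how a cohort floor works at the data level (partner names, scores, and the floor value are illustrative; this is not SRA's actual instrument):

```python
from collections import defaultdict

COHORT_FLOOR = 7  # minimum respondents before partner-level data is reported

def reportable_feedback(responses, floor=COHORT_FLOOR):
    """Aggregate upward-feedback scores by partner, suppressing any
    partner whose respondent count falls below the cohort floor."""
    by_partner = defaultdict(list)
    for partner, score in responses:
        by_partner[partner].append(score)
    return {
        partner: {"respondents": len(scores),
                  "mean_score": round(sum(scores) / len(scores), 2)}
        for partner, scores in by_partner.items()
        if len(scores) >= floor
    }

# Hypothetical responses: (partner, score on a 1-5 scale)
responses = [("Partner A", s) for s in (4, 5, 3, 4, 4, 2, 5)] + \
            [("Partner B", s) for s in (5, 5, 4)]  # only 3 respondents

report = reportable_feedback(responses)
print(report)  # Partner B is absent: a 3-person cohort cannot be anonymous
```

The design choice to suppress below-floor data entirely — rather than report it with a caveat — is what makes the anonymity promise credible to associates.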
Without these two architectural features, partner 360s tend to function more as morale exercises than as diagnostic tools. With them, they become one of the most operationally useful data sources a managing partner has.
“Keeping the partners we want to keep” — what the Legal Evolution 2026 data means for partner reviews
Legal Evolution’s December 2025 Burning Issues report surveyed US law firm leaders across firm sizes about their top priorities going into 2026. The number one priority — the issue that generated the most attention across firms of all sizes — was “keeping the partners we want to keep.”
The framing of the issue matters. Firm leaders are not concerned about partner departures generally; many partner departures are healthy or even desirable. They are concerned about losing the specific partners they most want to retain — the rainmakers, the practice group leaders, the partners whose departure would meaningfully harm the firm’s competitive position.
The conversation among firm leaders in the report repeatedly came back to a single theme: most firms do not actually know what their best partners want. There is no structured developmental conversation, no career planning, no “what would make you stay another decade” inquiry. The lateral departures that hurt most are often partners who would have stayed if the firm had asked them what they needed — and would have known to ask if a real partner review program had been running.
Reading the Legal Evolution findings together with what we see in partner review programs: the firms that retain their best partners are not the ones paying the most. They are the ones running structured, developmental, confidential partner reviews that give partners a forum to talk about what they want next — before a lateral recruiter offers them what the firm should have offered them six months earlier.
Building a partner review program designed for retention, not just measurement
SRA designs and runs confidential partner performance review and 360-degree feedback programs for US law firms. Administered by an independent third party. Cohort-floor anonymity built into the architecture. Developmental in framing rather than punitive. Designed to give managing partners and executive committees the information they need to retain the partners they most want to retain.
Our clients include Am Law firms across New York, Chicago, Los Angeles, Washington D.C., Houston, and Boston. We have administered partner review programs at firms ranging from 40 attorneys to 1,200+, and the architectural principles below are drawn from that work.
- → Schedule a partner review program consultation
- → Explore SRA’s partner and 360-degree review services
The five components of an effective partner performance review at a US law firm
Partner review programs that work in practice consistently include five components. Programs missing any one of these tend to produce data that leadership either does not trust or does not use.
1. Self-assessment. The partner’s own view of their contributions, gaps, development needs, and career direction. This is often the most useful single input in the entire program. Partners rarely have a forum to articulate what they want next, and the self-assessment provides that forum. It also reveals the gap between self-perception and external perception, which is one of the most useful diagnostic signals in any review.
2. Peer partner assessment. Other partners weighing in on the reviewed partner’s contributions, leadership, integration with the firm, and effectiveness on shared matters. Peer partner data is the input most firms collect best because partners are willing to give it honestly. The challenge is making sure it is structured and comparable, not just anecdotal.
3. Upward feedback from associates and staff. The hardest input to get honestly and the input most firms most need. This is where the anonymity architecture (external administration, cohort floors) determines whether the data is real or curated. When done well, upward feedback surfaces the partner-specific issues that quietly drive associate attrition and culture damage.
4. Quantitative production and origination data. Origination, working hours, realization, write-offs, leverage on associates, practice group profitability. This is the easiest data to collect and the most commonly over-weighted input in partner reviews. Production data is necessary but not sufficient — a partner with strong production numbers and weak everything else is often a net negative for the firm’s future.
5. Client feedback (where possible). Direct client feedback is the most powerful input but also the most difficult to collect without creating discomfort in the client relationship. The firms that do this well typically use light-touch annual or biennial client check-ins administered by a third party, framed as relationship-building rather than partner evaluation. The data feeds the partner review without the client ever being asked to evaluate “their” partner directly.
Together these five inputs produce the multi-source view that supports both developmental conversations and compensation defensibility. Programs running on only one or two of these inputs typically have blind spots that surface as surprises later.
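The production metrics in component 4 are straightforward arithmetic. As a sketch with made-up numbers (hours, rates, and write-off figures are all illustrative assumptions):

```python
# Illustrative realization arithmetic for one partner-year.
# All inputs are made-up numbers for demonstration.

hours_worked = 1_800
standard_rate = 900                                 # standard hourly rate
standard_value = hours_worked * standard_rate       # 1,620,000 at full rates

amount_billed = 1_450_000      # after write-downs at billing
amount_collected = 1_350_000   # after write-offs at collection

billing_realization = amount_billed / standard_value       # write-down effect
collection_realization = amount_collected / amount_billed  # write-off effect
overall_realization = amount_collected / standard_value

print(f"Billing realization:    {billing_realization:.1%}")
print(f"Collection realization: {collection_realization:.1%}")
print(f"Overall realization:    {overall_realization:.1%}")
```

The easy computability of these numbers is exactly why they get over-weighted: they arrive precise and comparable, while the other four components require deliberate collection.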
Using partner 360 data developmentally, not punitively
The single most common reason partner review programs fail at US firms is that partners (correctly) suspect the data will be used punitively if it is used at all. When that suspicion takes hold, the cycle starts: peer partners soften their feedback, associates curate their upward responses, self-assessments become PR exercises, and the data the firm collects describes a firm that does not exist.
The firms that have built partner review programs that work in practice frame the program differently from the start. Three framing choices distinguish them.
Framing 1 — The program is developmental by design. Findings are delivered to the reviewed partner in a structured, supportive conversation. The conversation focuses on what the partner wants next and how the firm can help, not on what is wrong with the partner. Compensation and partnership decisions draw on the data but are not framed as the program’s purpose.
Framing 2 — The findings stay with the partner first. In well-designed programs, the reviewed partner sees their findings before anyone else does. They have the opportunity to respond, to identify what surprised them, to flag context the data may not capture. The findings then move to executive leadership only with that prior conversation already in place. This is the opposite of how most firms handle this and is one of the most important trust-building design choices.
Framing 3 — Pattern, not snapshot. The data is most useful when it is read as a multi-year pattern rather than a single-year snapshot. A partner whose scores are trending up over three years is in a different position from a partner whose scores are trending down, even if both have the same current absolute score. Single-cycle data is too volatile to drive major decisions; pattern data is genuinely diagnostic.
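The pattern-versus-snapshot distinction can be made concrete with a simple trend calculation. The scores and the least-squares slope here are illustrative; real programs would use richer multi-year pattern analysis:

```python
def trend(scores):
    """Least-squares slope of yearly scores — a crude trend signal.
    Positive means improving, negative means declining."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

partner_up   = [3.1, 3.6, 4.0]   # same current score of 4.0...
partner_down = [4.8, 4.4, 4.0]   # ...opposite three-year trajectory

print(trend(partner_up))    # positive slope
print(trend(partner_down))  # negative slope
```

Both partners show an identical 4.0 in the current cycle; only the multi-year view distinguishes the one improving from the one declining.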
Used developmentally, partner reviews build trust, surface real issues, and give the firm the information it needs to make retention investments. Used punitively, they produce defensive partners, curated data, and the eventual perception among partners that the firm’s leadership cannot be trusted with honest input.
What the firms retaining their best partners have in common
Drawing from partner review programs we have administered for US law firms over the past decade, and triangulating against the Legal Evolution 2026 data and broader retention research, the firms that successfully retain their best partners share six practices.
None of these practices is exotic. None of them requires technology the firm doesn’t already have access to. What they require is the architectural commitment to treat partner performance review as a serious, structured discipline rather than a perfunctory year-end activity — and to design the program around the partners the firm most wants to keep, not around the partners the firm is comfortable evaluating.
Three questions to ask before redesigning your partner review program
If your firm is considering building or redesigning a partner review program in 2026, three questions surface the structural issues before the design work begins.
Two or three “no” answers mean the redesign needs to start with the architecture, not with the review instrument. Most firms try to fix the instrument first, which is why most redesigns produce slightly different versions of the same dysfunction.
Frequently asked questions
How often should US law firms run partner performance reviews? Most leading firms run formal partner reviews annually, with mid-year check-ins on development goals and career direction. Biennial cycles are common at firms early in building partner review infrastructure but lose much of the developmental value of annual cadence. Quarterly is too frequent and produces administrative overload without commensurate value.
Should equity partners and non-equity partners be reviewed the same way? No. Equity partners are owners and the review should be structured around ownership dimensions — strategic contribution, client portfolio, practice group leadership, firm succession. Non-equity partners (income partners) are closer to senior associates structurally and benefit from a review framework that bridges associate-style multi-source input with partner-level career conversations. Using the same instrument for both produces a poor fit for both groups.
How does partner review data connect to compensation? Carefully. Partner compensation systems vary widely — lockstep, modified lockstep, eat-what-you-kill, and various hybrid models. Partner review data feeds compensation decisions but is rarely the sole input. The firms that get this connection right make the relationship between review findings and compensation transparent to partners while preserving the comp committee’s discretion. Firms that obscure the connection produce cynicism among partners about what the review is actually for.
Can the same program review partners and associates? Yes, but the partner program should be architecturally distinct — different inputs, different framing, different administrator, different delivery. A unified “performance program” that treats partner and associate reviews as variations of the same instrument typically produces weak partner reviews. The better architecture is parallel programs with deliberate integration points.
What about partners who refuse to participate in 360 review? This is more common than firms admit, particularly with senior partners. The most effective response is to make participation a firm-wide expectation at the partnership compact level rather than negotiating per-partner. Firms that allow opt-outs typically find the partners who most need development feedback are the partners most likely to opt out. The architectural principle: participation in the partner review program is a partner obligation, not a partner choice.
How does this connect to associate performance reviews and engagement? Tightly. Partner performance and associate performance are interdependent at every level. Partner behavior toward associates is a major driver of associate retention; associate development quality is a major component of partner performance. The firms that run their partner and associate review programs as connected systems produce better outcomes on both sides than firms that run them in isolation. We covered the broader architecture in What Is the Difference Between a Performance Evaluation and a Performance Review at a US Law Firm?.
Is this different at small firms versus Am Law 200 firms? The principles are the same, the implementation differs. A 40-attorney firm can run a structured partner review program with eight to fifteen partners, full peer assessment, and external administration of upward feedback. A 1,000-attorney firm needs more sophisticated cohort segmentation, practice group-level analysis, and more formal succession planning integration. The architectural commitments — developmental framing, structural anonymity, multi-source input — work at every size.
Sources
- Legal Evolution (December 2025). The 2026 ‘Burning Issues’ Confronting Firm Leaders. legalevolution.org
- NALP Foundation (November 2025). Performance Evaluations Study: A Comprehensive Assessment of Process and Efficacy at 106 Leading Law Firms. nalpfoundation.org
- NALP Foundation (2025). Update on Associate Attrition and Hiring (CY 2025). nalpfoundation.org
- K38 Consulting (2025). Compensation Structure Clarity Survey — 53% of attorneys report unclear pay structures. k38consulting.com
- ABA Journal (April 2025). Associates continue to leave firms within 5 years of hire, new report says. abajournal.com
- Thomson Reuters Institute and Georgetown Law (2026). Report on the State of the US Legal Market. thomsonreuters.com
Related reading on srahq.com
- → Partner Performance Review: How US Law Firms Evaluate Equity Partners in 2026
- → What Is the Difference Between a Performance Evaluation and a Performance Review at a US Law Firm?
- → How Should US Law Firms Separate the Coaching Conversation from the Performance Review Record?
- → What Does eNPS Mean for US Law Firms?
- → Which Employee Engagement Software Should US Law Firms Actually Use in 2026?
- → Attorney Performance Review: A Complete Law Firm Guide (2026)
Partner performance review is the most consistently under-built function in US law firm HR — and the most directly connected to the #1 priority US firm leaders named for 2026. The firms retaining their best partners are not the highest payers. They are the firms running structured, developmental, confidentially administered partner review programs that give partners a forum to talk about what they want before a competitor offers it to them.
SRA designs and runs confidential partner performance reviews, 360-degree feedback, and equity partner evaluations for US law firms. Independent third-party administration with cohort-floor anonymity built into the architecture. Designed developmentally, integrated with associate review and engagement programs, and built specifically for the structural features of US law firm partnerships. Exclusively for US law firms since 1987.
Partner Reviews | Upward Reviews | 360-Degree Feedback | Firm Engagement Survey | Schedule a Consultation
Exclusively serving United States law firms since 1987.


