A managing partner at a 240-attorney firm in Chicago described a moment that captures the problem better than any survey data could.
In December, the compensation committee was reviewing year-end bonuses for a fifth-year associate. The lead partner on her largest matters submitted glowing comments. The bonus number got drafted on that basis. Two days before the recommendation went to firm leadership, an HR director found a series of mid-year coaching notes from a different supervising partner. They flagged concerns about responsiveness, missed deadlines, and a frayed client relationship. None of it had been documented anywhere the compensation committee would have seen it.
The committee had two records of the same associate. One was glowing. One was concerning. Both had been called “her review” at different points in the year. Neither was an evaluation.
This is what happens when a firm runs performance reviews and performance evaluations as if they are the same function. They are not. The mix-up shows up in compensation defensibility, PIP documentation, retention numbers, and the everyday experience of associates who genuinely cannot tell whether they are doing well.
In November 2025, the NALP Foundation released its first comprehensive study of associate performance evaluation practices at 106 leading US law firms. The findings are blunt: even firms with mature processes are blurring these two functions. Firms that have separated them cleanly are running materially better operations, retaining associates longer, and producing evaluation records that hold up when something goes wrong.
SRA has designed and run confidential performance review and evaluation programs exclusively for US law firms since 1987. This guide covers what each term actually means, how the two should function together, what the latest NALP data reveals about US law firm practice, and the specific steps firms should take to fix the gap.
Why the distinction matters more than law firms realize
The terminology problem is not academic. When a firm cannot cleanly distinguish between a performance review and a performance evaluation, three operational failures follow predictably.
Compensation decisions get made on the wrong inputs. A partner who enjoyed working with an associate writes glowing comments. Those comments enter the bonus calculation directly. No structured evaluation aggregates input from the four other partners who saw weaker work earlier in the year. The firm pays out a bonus based on one person’s impression rather than on a documented, multi-source record. When the associate underperforms the following year, leadership has no factual basis for the conversation other than “things have changed.”
PIPs and terminations become legally fragile. When a firm needs to manage an underperforming associate, the documentation has to do real work. If “the review” was a hallway conversation and “the evaluation” was a partner’s brief paragraph submitted under deadline pressure, there is no structured record to support the decision. Wrongful termination claims, EEOC complaints, and bar association inquiries land disproportionately on firms that cannot produce dated, multi-source, behavior-specific evaluation documentation.
Associates do not know where they actually stand. This is the most consequential failure, and the NALP data quantifies it directly.
Source: NALP Foundation, Performance Evaluations Study, November 2025.
By the time the conversation happens, the matters being discussed are months old. The associate experiences the review as stale and the evaluation as a formality. The firm experiences the cycle as expensive without getting the operational signal it should be getting.
The cost is not abstract. According to the NALP Foundation’s 2025 Update on Associate Attrition, 83 percent of associates who departed in 2025 left within five years of being hired, a record high, up from 80 percent in 2024. Lateral associates, associates of color, and associates who didn’t participate in their firm’s summer program left at higher rates than their peers. The average associate attrition rate was 19 percent in 2025; firms with 100 or fewer attorneys had a 24 percent attrition rate. Stale, decoupled, single-source feedback is not the only cause of this attrition, but it is a measurable contributor.
What a performance review actually is
A performance review is a coaching conversation. Its primary function is developmental: helping an attorney get better at the work, faster.
The format is verbal. The frequency is continuous. The output is behavior change, not a document.
A well-run performance review at a US law firm includes four elements:
Specific, recent, matter-level feedback. Not “your writing has been strong this quarter,” but “the section II analysis in the Henderson brief was the cleanest argumentation I’ve seen from you. Here’s why, and here’s what to apply to the next one.” Fresh feedback on actual work changes behavior. General feedback on aggregated impressions does not.
A two-way conversation about the work environment. Workload, staffing, frustrations, growth opportunities the associate wants to pursue. The review is also where the partner finds out what is actually happening on the matters she is not directly running, like the staffing imbalance with a particular partner, or the matter that is consuming all of an associate’s time at the expense of development.
Concrete next steps. Two or three things the associate should focus on in the next 30 to 60 days. Specific enough that both parties can tell whether they happened by the next conversation. “Improve your client communication” is not a next step. “Send a written status update to the Henderson team every Friday for the next six weeks” is.
Course corrections, not surprises. If something needs to change, the associate hears it in March, not in December. The annual evaluation should not contain feedback that was not surfaced in any review during the preceding year. When it does, the firm has a coaching gap, and that gap is one of the strongest predictors of unwanted attrition.
The cadence varies by firm. Leading US firms run reviews quarterly at minimum, with first- through third-year associates often getting monthly check-ins because development needs are most acute and retention risk is highest. In 2024, nearly three-quarters (74 percent) of associates who departed did so within their first four years at the firm. The coaching window for retention is shorter than most firms operate around.
For a deeper treatment of how continuous review systems are designed for legal environments, see How to Create a Culture of Continuous Feedback in Law Firms.
What a performance evaluation actually is
A performance evaluation is the formal record. Its primary function is decisional: making compensation, promotion, partnership-track, and retention decisions defensible.
The format is documentary. The frequency is periodic, typically annual or semi-annual. The output is a written file that lives in the associate’s record and feeds into firm leadership decisions for years afterward.
The NALP Foundation 2025 study identified a clear pattern in what evaluations actually contain at leading US firms.
Source: NALP Foundation, Performance Evaluations Study, November 2025 (n = 106 firms).
That is the anatomy of a real evaluation: structured aggregation of multiple inputs, written record, dated entry, retained in the file. It is the artifact compensation committees rely on, partnership track decisions reference, and HR files preserve when something needs to be defended.
The evaluation is also the function where US firms struggle most. The NALP data reveals substantial variance in how firms handle feedback attribution: whether comments are credited to specific reviewers or kept anonymous.
Source: NALP Foundation, Performance Evaluations Study, November 2025.
There is no industry standard on attribution. Each firm has to make a deliberate choice. Most have not. The drift between attributed and anonymous feedback within the same evaluation cycle is one of the more common operational failures the NALP data surfaces.
A practical test: If your firm cannot produce a written, multi-source evaluation document for any given associate within five minutes — including supervising attorney comments, quantitative metrics, self-assessment, and dated entries — your evaluation system is not actually functioning, regardless of how many review conversations have taken place.
→ See how SRA structures end-to-end evaluations
The operational differences between reviews and evaluations
Once the two functions are separated cleanly, the differences become straightforward to manage. The table below summarizes the distinction in a form firms can use directly in policy documents and partner training.

| Dimension | Performance review | Performance evaluation |
| --- | --- | --- |
| Primary function | Developmental: coaching and course correction | Decisional: compensation, promotion, partnership track, retention |
| Format | Verbal conversation | Written, dated document |
| Frequency | Continuous (quarterly at minimum; often monthly for junior associates) | Periodic (annual or semi-annual) |
| Inputs | Specific, recent, matter-level feedback | Aggregated multi-source input: partner comments, quantitative metrics, self-assessment, peer and upward feedback |
| Output | Behavior change and concrete next steps | A retained record that feeds firm leadership decisions |
When a partner says “I gave her a review last week,” he typically means a conversation. When HR says “we need her review for the comp meeting,” they mean the document. The terminology fix is small in effort and outsized in impact, because it forces the conversation about which functions actually exist at the firm and which are gaps dressed in shared vocabulary.
Ready to separate reviews from evaluations cleanly?
SRA designs and runs structured evaluation programs and continuous review systems exclusively for US law firms. Our clients include Am Law firms across New York, Chicago, Los Angeles, Washington D.C., Houston, and Boston.
If your firm is running review conversations without a corresponding evaluation infrastructure, or producing evaluation documents without the upstream coaching conversations to support them, we are glad to walk through what a connected system looks like and what it takes to implement.
→ Schedule an evaluation system consultation → Explore SRA’s review and evaluation programs
What the November 2025 NALP data reveals about US law firms
The NALP Foundation’s study is the most current, most comprehensive data set on US law firm evaluation practices. Several findings are directly relevant to any firm benchmarking its own program.
Stated goals are universally aligned. Respondents were overwhelmingly aligned in their primary goals during performance evaluations: associate advancement within the firm scored 100 percent, professional development 97 percent, and identifying and setting future performance goals 94 percent.
Every firm in the study agrees on what evaluation is supposed to accomplish. The variance is entirely in execution. The gap between leading and lagging firms is not strategic disagreement about purpose. It is operational discipline in process.
Cohort comparison is missing at almost a third of firms. Without cohort comparison, an evaluation tells you almost nothing about whether an associate is tracking ahead, on pace, or behind their peers. A “meets expectations” rating is meaningful only relative to a benchmark.
Source: NALP Foundation, Performance Evaluations Study, November 2025.
External vendor satisfaction is moderate at best. Approximately 76 percent of firms use an external or third-party vendor to collect evaluation feedback, roughly 31 percent use internally developed electronic surveys, and 7 percent use both. Those using external vendors reported only moderate overall satisfaction. Paired with that satisfaction level, the 76 percent adoption figure is a meaningful market signal: current generic-vendor solutions are not meeting law firm needs at the standard firms expect.
AI integration is early and limited. The majority (69 percent) of firms reported they had not yet integrated AI into their performance evaluation process. AI usage was most common for generating performance summaries (18 percent) and analyzing written feedback (11 percent). Firms exploring AI in evaluation work are doing so without industry-standard practices to reference, which creates both opportunity and risk. We covered the operational tradeoffs in AI in Law Firm HR: What Should Be Automated and What Shouldn’t.
These findings are not from small firms with constrained resources. They are from 106 leading US law firms with the budget and intent to do this well. The execution gap is industrywide.
How reviews and evaluations should function as one connected system
Once a firm has separated the two functions cleanly, the next design question is how they connect. The connection is what most firms have to deliberately build, because nothing in the default operating rhythm of a US law firm produces it automatically.
Continuous reviews feed the evaluation. The review conversations happening throughout the year should not vanish when evaluation season arrives. The notes, observations, and feedback themes from review conversations should be captured, even briefly, in a system that can surface them when the formal evaluation is being compiled. This turns “the year-end evaluation that nobody remembers writing” into “a documented compilation of feedback the associate has already heard in fragments throughout the year.”
The associate experience changes substantially when this connection exists. The evaluation conversation contains nothing surprising because the underlying feedback has been in circulation for months. The evaluation document is robust because it draws on a year of structured input rather than two weeks of memory under deadline pressure.
Structured evaluations aggregate and formalize. Once or twice a year, the firm compiles the review conversations, quantitative metrics, self-assessments, peer input, and confidential upward feedback from associates on the partners they work for into a single structured document. That document drives compensation, partnership-track, and development planning decisions.
The aggregation step is where many firms fail. Multi-source input only matters if someone systematically combines it into a usable summary. The 66 percent of firms in the NALP study that include “summaries or compilations” in their evaluations are doing this work. The 34 percent that do not are producing evaluation files that are essentially a stack of disconnected inputs, which compensation committees cannot easily act on.
The conversation closes the loop. The evaluation is delivered to the associate in a structured conversation, not as an email attachment. The conversation gives the associate context for what the data means, allows for clarification questions, and connects the evaluation back to the review themes the associate has been hearing throughout the year. Nothing in the evaluation should be a surprise. If anything is, the firm has a coaching gap to address before the next cycle.
When firms run only the evaluation without the upstream reviews, associates feel ambushed. When firms run only the reviews without the formal evaluation, compensation and promotion decisions feel arbitrary. The system requires both, and the two halves only produce value when they reference each other.
For a complete framework on how the connected system fits into broader law firm performance management, see Attorney Performance Review: A Complete Law Firm Guide (2026).
Where US law firms should start fixing first
For firms auditing their own setup against the NALP benchmarks, the order of operations matters. Doing these in sequence produces compounding improvement. Doing them out of order tends to surface problems faster than the firm can absorb them.
1. Decide which artifact you are producing, and label it accordingly. A conversation is a review. A document is an evaluation. Train partners and HR to use the terms accurately. Update policy documents, evaluation templates, and partner training materials to reflect the distinction. The terminology fix is the smallest effort with the biggest downstream impact, because every other improvement depends on people knowing which thing they are doing at any given moment.
2. Compress evaluation timelines. The total cycle from evaluator start to associate feedback delivery should run under four weeks. Most US firms are well above this. The interventions are infrastructural: distributed evaluation collection, automated reminders, structured aggregation, and a defined service-level expectation on feedback delivery. Firms that get the cycle under four weeks join the leading 14 percent of the industry on this dimension and meaningfully reduce the staleness problem associates flag in exit surveys.
3. Verify multi-source input on evaluations. If your evaluation captures only the lead partner’s view, you do not have an evaluation. You have one person’s opinion in a Word document. The NALP data showing 96 percent of firms using quantitative metrics, 93 percent capturing self-assessments, and most including peer or upward input reflects an industry standard your firm should be meeting. Where any of these inputs are missing, build them in.
4. Decide your stance on attribution. Attributed feedback is more actionable for the associate but generates less candid responses from the partners providing it. Anonymous feedback is more candid but harder to follow up on operationally. There is no universally correct answer. There is a wrong outcome: drift between attributed and anonymous within the same evaluation cycle. Decide once, document the decision, train evaluators on the standard.
5. Separate evaluation infrastructure from generic HR software. Generic HR platforms were designed for corporate hierarchies and employee-employer relationships. They were not designed for partner structures, billable culture, the partner-associate review dynamic, or upward feedback on senior attorneys. The moderate satisfaction reported by the 76 percent of firms using external vendors is largely a function of using tools that were not purpose-built for legal-industry evaluation.
For a fuller treatment of why generic HR platforms keep failing in law firm environments and what purpose-built infrastructure looks like, see HR Software for Law Firms: Why Generic Platforms Keep Failing and Performance Management Software for Law Firms: 2026 Buyer’s Guide.
Frequently asked questions
Are performance reviews and performance evaluations legally the same? No. US courts and employment lawyers consistently treat the formal evaluation document as the authoritative record. Conversation-based reviews carry weight only when documented contemporaneously and dated. Firms relying on undocumented reviews to support termination decisions face substantially higher legal exposure than firms with structured, multi-source, time-stamped evaluation files. In wrongful termination matters, the evaluation file is typically the first document subpoenaed.
How often should US law firms run formal evaluations? Most leading US firms run formal evaluations once or twice a year, supplemented by continuous review conversations throughout the year. Annual-only setups are increasingly seen as too slow given how quickly associate sentiment shifts, particularly during the first three years of practice, the period during which 74 percent of unwanted departures occur. Many Am Law firms now run mid-year evaluations or pulse assessments to give leadership earlier signals on retention risk and development needs.
Can the same software handle both reviews and evaluations? It can, but most platforms are built for one or the other. Generic HR tools focus on annual evaluation cycles and miss continuous feedback. Pure feedback applications capture conversations but produce thin evaluation documents. Purpose-built law firm platforms handle both because they were designed around how legal work and partner-associate dynamics actually function. The only-moderate satisfaction that firms using external evaluation vendors reported in the NALP data reflects this gap directly.
What is the difference between an attorney performance evaluation and an attorney performance review? The performance review is the verbal coaching conversation between a supervising attorney and an associate, focused on development. The performance evaluation is the formal written document compiling input from multiple sources — supervising attorney comments, quantitative metrics, self-assessment, and often peer or upward feedback — that drives compensation, promotion, and partnership-track decisions. Reviews are continuous and developmental. Evaluations are periodic and decisional. The same word (“review”) is often used for both colloquially, which is the root of most operational confusion at US firms.
What about evaluating partners? Partner evaluation is structurally different from associate evaluation and requires its own framework. Equity partners are owners, not employees, and the review framework that works for associates produces diplomatic non-answers when applied to partners. We covered the specifics in Partner Performance Review: How US Law Firms Evaluate Equity Partners in 2026.
Should associates participate in their own evaluations? Yes, and 93 percent of firms in the NALP study already do this through self-assessments. The self-assessment serves two functions: it creates a structured opportunity for the associate to surface accomplishments and context that supervising partners may have missed, and it reveals gaps between how the associate sees their own performance and how the firm sees it. The self-versus-partner gap is one of the most useful diagnostic signals in the entire evaluation process. We covered how to use it in Attorney Self-Assessment Surveys at US Law Firms: How the Self-vs-Partner Gap Reveals What Annual Reviews Miss (2026).
How does the review-evaluation distinction affect associate retention? Significantly. Firms that run only periodic evaluations without continuous reviews have a coaching gap. Associates do not know how they are performing in real time, and feedback that does arrive feels stale. The NALP Foundation’s 2025 attrition data shows 83 percent of departing associates leave within five years of hire, a record high. Firms with mature continuous review systems supplemented by structured annual evaluations report materially lower unwanted attrition than firms running either function in isolation.
Sources
- NALP Foundation (November 2025). Performance Evaluations Study: A Comprehensive Assessment of Process and Efficacy at 106 Leading Law Firms. nalpfoundation.org
- NALP Foundation (2025). Update on Associate Attrition and Hiring (CY 2025). nalpfoundation.org
- Canadian Lawyer Magazine (November 18, 2025). Most law firm associate evaluations don’t use AI or give speedy feedback: survey. canadianlawyermag.com
- ABA Journal (April 2025). Associates continue to leave firms within 5 years of hire, new report says. abajournal.com
- NALP Foundation (2024). Update on Associate Attrition and Hiring (CY 2024). nalpfoundation.org
Related reading on srahq.com
- → Attorney Performance Review: A Complete Law Firm Guide (2026)
- → Partner Performance Review: How US Law Firms Evaluate Equity Partners in 2026
- → Attorney Self-Assessment Surveys at US Law Firms: How the Self-vs-Partner Gap Reveals What Annual Reviews Miss (2026)
- → HR Software for Law Firms: Why Generic Platforms Keep Failing
- → How to Create a Culture of Continuous Feedback in Law Firms
- → Performance Management Software for Law Firms: 2026 Buyer’s Guide
- → AI in Law Firm HR: What Should Be Automated and What Shouldn’t
Is your US law firm running performance reviews and performance evaluations as one blurred process, or as two distinct systems that connect deliberately?
SRA’s evaluation programs are administered with structural separation enforced at the architecture level: continuous review infrastructure that captures coaching conversations as they happen, structured evaluation cycles that aggregate multi-source input on dated, defensible records, and reporting designed for the way US law firms actually make compensation, promotion, and partnership decisions. Fully managed for United States law firms since 1987.
Upward Reviews | 360-Degree Feedback | Firm Engagement Survey | Evaluation Consultation | All Services