May 1, 2026

How to Run Performance Reviews at a Small US Law Firm: A Practical Guide for 2026

Shivani Shah

Most small US law firms either skip formal performance reviews entirely or run them inconsistently. The reviews happen once every 18 months when a managing partner has the time. Or they happen scattered across a quarter when bonus decisions force the issue. The reason is rarely a lack of conviction. Small firm leadership generally agrees that structured reviews would help.

The reason is operational.

The available templates are designed for firms five to ten times the size of a typical small firm. The available software is built around HR functions small firms do not have. The consultants who design these programs typically do not work below the AmLaw 200.

The result is that small US law firms operate with the least structured performance feedback infrastructure of any segment in the legal market. It is also the segment with the highest proportional cost per attorney departure, according to the 2025 BigHand data. A 25-attorney firm losing a single mid-level associate absorbs roughly $750,000 in attrition costs. That figure represents a meaningfully larger proportion of the firm's overall economics than the equivalent loss at an AmLaw 100 firm.

The case for getting performance reviews right at small firms is, in proportional terms, stronger than at large firms. Not weaker.

SRA has designed and run confidential performance review programs exclusively for law firms in the United States since 1987. Our small firm clients operate across New York, Chicago, Los Angeles, Houston, Atlanta, Boston, Washington D.C., and dozens of regional markets. This guide covers how to actually run a performance review process at a small US law firm. What to evaluate. How to structure the cadence. Who should conduct the conversations. How to handle the structural challenges that make small firm reviews harder than the templates assume.

What "small US law firm" means

For the purposes of performance review design, "small" in the US legal market refers to firms with roughly 5 to 50 attorneys. The boundary at 50 is operationally meaningful.

Above that size, firms typically have at least one dedicated HR resource. They have a more formal partnership structure. They have the institutional capacity to run reviews on infrastructure designed for larger firms.

Below 50, the firm is usually operating with distributed administrative responsibility. Partnership decision making is informal. Review infrastructure has to fit inside the actual operating capacity of the firm.

Within that range, the design principles below apply broadly. Firms at 5 to 15 attorneys will naturally run a lighter-weight version of the process. Firms at 30 to 50 will run something closer to what mid-size firms do. The structural advice is the same.

Why small US law firms struggle with performance reviews

Three structural realities make performance reviews harder at small firms than the standard templates assume.

The HR function is part-time, distributed, or nonexistent. At a 25-attorney firm, performance reviews are typically run by an office manager. Or by a managing partner who took on the responsibility informally. Or, increasingly, by no one consistently.

The annual review process requires roughly 40 to 60 hours of administrative time at small firm scale. Scheduling. Distributing forms. Collecting responses. Aggregating data. Preparing review documents. Scheduling conversations. At a firm with no dedicated HR resource, those hours come out of someone whose primary job is something else. The natural consequence is that the cycle gets compressed, skipped, or run with deteriorating quality after the first year.

Anonymity is structurally harder. A 25-attorney firm with four senior associates and three supervising partners cannot deliver upward feedback that is genuinely anonymous through any internal system. Associates know each other's matter assignments. They know their partners' supervision styles. They know the small-team dynamics that make individual responses identifiable even when names are stripped from the data.

The default response in this environment is diplomatic feedback that produces no operational value. The only architectural fix is independent third-party data collection with minimum respondent thresholds before any individual-level reporting. For more on anonymity design, see How to Make Feedback Anonymous at a Law Firm and Confidential Upward Feedback in Small Law Firms: How to Build Trust That Sticks.

The cost of getting it wrong is concentrated. At a 200-attorney firm, losing one strong associate is one departure among many. At a 25-attorney firm, the same departure represents 4 percent of the firm's lawyer headcount. The operational disruption is concentrated in whichever practice area the associate worked in. Small firms that lose two associates in the same practice area in the same year often lose the ability to deliver that practice area's work for the next 12 to 18 months.

The economic concentration of attrition risk at small firms is one of the under-discussed strategic realities of small firm management. The case for treating performance reviews as a leadership priority follows from it directly.

The seven steps of running performance reviews at a small US law firm

The process below reflects how SRA's small firm clients across the United States actually run their review programs. The steps are scaled to small firm operational realities. They are designed to surface honest feedback despite the anonymity challenges of small populations. They are structured to be sustainable over multiple cycles.

Step 1: Decide the cadence before designing anything else

The single most important upfront decision is review frequency. Three patterns work well at small US law firms.

The first pattern is annual full reviews with no formal mid year process. This is the simplest pattern and the lowest operational lift. It is appropriate for firms at 5 to 15 attorneys with stable practice mixes and limited administrative capacity.

The second pattern is semi-annual reviews. A lighter-weight summer check-in plus a fuller annual review. This is the most common pattern at small firms with 15 to 50 attorneys.

The third pattern is continuous feedback. Formal review documentation is collected at quarterly intervals and consolidated annually. This is the most data-rich pattern but requires either a managed service or sustained internal administration that most small firms cannot reliably provide.

The wrong cadence choice is the most common cause of small firm review program failure. Firms that try to run continuous feedback without the administrative capacity reliably abandon it after one cycle and end up with no formal review process at all. Firms that run annual-only reviews at 30+ attorneys lose visibility into mid-year attrition risk that more frequent check-ins would surface.

The right cadence is the one your firm can actually sustain over three review cycles. For more on cadence design, see Continuous Feedback at US Law Firms: Why It Beats Annual Reviews.

Step 2: Define what you are measuring

Performance review programs that fail at small firms almost always fail at this step. The default is using a generic competency framework adapted from corporate HR templates. This produces review data that is either too generic to be useful or too detailed to be sustainable.

Too generic looks like every associate scoring 4 out of 5 on "professionalism." Too detailed looks like a 45-line competency rubric that no one will fill out twice.

Effective small firm review frameworks measure across four dimensions. Each dimension has three to five behavioral indicators.

The four dimensions are:

  1. Legal craft. Technical competence. Work product quality. Judgment.
  2. Client and matter management. Responsiveness. Scoping. Delivery.
  3. Collaboration and supervision. With peers. With partners. For senior associates, with juniors.
  4. Development trajectory. Initiative. Learning. Growth in role over the review period.

Within each dimension, the behavioral indicators should be specific to legal practice rather than generic corporate language. "Drafts client communication that requires minimal partner editing" is operationally useful. "Demonstrates strong written communication skills" is not.

Build this framework once. Use it consistently across three review cycles. Small firms that do this produce review data of materially higher quality than firms that revise the framework every cycle or use a different framework for different practice groups.

Step 3: Combine self assessment with multiple rater input

A review based on a single supervising partner's input is a single-rater evaluation. It is subject to all the limitations of single-source data. Effective small firm reviews combine three input streams.

Self-assessment is the first stream. The attorney being reviewed completes a structured reflection on their performance against the firm's competency framework. Specific examples are required, not just ratings. Self-assessments completed before partner reviews (rather than after, which is the more common pattern) produce materially better data. They surface gaps between self-perception and partner perception. Those gaps are often the most useful conversation starters in the entire review process. For more, see Attorney Self Assessment Surveys at US Law Firms.

Multi-partner input is the second stream. At least three to four supervising partners per associate per year submit structured feedback. Ideally this happens at matter completion rather than once at year-end. The variation across partner inputs is itself diagnostic. An associate who scores consistently across all supervising partners is one situation. An associate who scores well with some partners and poorly with others is a different situation. The difference is almost always more about supervisor fit and work allocation than about the associate's underlying performance.

Confidential upward feedback is the third stream. Associates provide structured input on the partners they work with. This is the input most often skipped at small firms because of the anonymity challenges discussed above. It is also the input most operationally useful for surfacing supervision quality issues and work allocation patterns that financial metrics cannot capture.

Done well, with independent third-party administration, upward feedback is the highest-value data stream in any small firm review program. Done poorly, administered internally with feature-level anonymity, it produces diplomatic data that wastes the participants' time. For specific upward review questions designed for law firm environments, see What Questions Should a Law Firm Ask in an Upward Review?.

Step 4: Run the review conversation, not just the review document

The most common small firm review failure is treating the review document as the deliverable. The associate completes a self assessment. The supervising partner writes narrative comments. The document is signed and filed. The conversation happens, if it happens, as a brief debrief at the end of an unrelated meeting.

The data is captured. The development conversation that the data was supposed to enable does not happen.

Effective small firm reviews treat the review document as the input to a structured conversation. Not the output of the process.

The conversation runs 45 to 60 minutes. It is scheduled deliberately. It follows a structure:

  1. Review of the data (10 to 15 minutes)
  2. Discussion of two or three specific developmental priorities for the next cycle (15 to 20 minutes)
  3. Agreement on concrete actions and supports the firm will provide (10 to 15 minutes)
  4. Explicit acknowledgment of strengths the firm wants the attorney to continue building on (5 to 10 minutes)

The conversation matters more than the document. Firms that get this step right produce review processes their attorneys describe as developmental. Firms that get it wrong produce processes their attorneys describe as bureaucratic.

Step 5: Calibrate across the partner group

At firms above 15 attorneys, partner-level calibration is the step that converts review data from individual attorney information into firm management information.

Calibration is a structured conversation among supervising partners. It addresses three questions. Are ratings being applied consistently across the firm? (Does a 4 out of 5 on "client communication" mean the same thing in the corporate group as in the litigation group? Are the partners with reputations as easy graders systematically giving higher scores than partners with reputations as harder graders?) Do the work allocation patterns surfaced in the data reveal practice-group-level issues that need leadership attention?

Calibration sessions also create a structural defense against rating drift over time. Rating drift is the gradual upward creep of all ratings over multiple cycles. It produces compressed review data within three to five years.

Firms that calibrate produce review data with usable variance over time. Firms that do not calibrate produce review data where everyone is "exceeding expectations" by year four. For more on how partner-level calibration works in practice, see Partner Performance Review: How US Law Firms Evaluate Equity Partners in 2026.

Ready to run performance reviews at your small US law firm without the administrative overhead?

SRA designs and runs confidential performance review programs exclusively for law firms in the United States. Our services include upward reviews, 360 degree feedback, engagement surveys, exit surveys, and self assessments. Our small firm clients operate across New York, Chicago, Los Angeles, Houston, Atlanta, Boston, Washington D.C., and regional US markets.

SRA's managed service model means small firm leadership does not run software. It does not absorb the 40 to 60 hours of administrative time per cycle that internal review programs require. SRA designs the program, administers it independently, analyzes the data, and delivers reports calibrated to your firm's size and structure.

If your firm is running performance reviews informally, has tried to implement a generic HR platform that produced diplomatic feedback, or is replacing Litera's discontinued Top Performance product, we are glad to walk through what a fully managed program looks like at small firm scale.

Schedule a small firm consultation | Explore SRA's program suite

Step 6: Document the outcomes without enterprise overhead

Small US law firms still need defensible review documentation. For partnership track decisions. For occasional terminations. For the rare employment dispute. For the routine question of "what did we agree on last year?" that comes up in every subsequent review conversation.

The challenge at small firms is producing documentation that is operationally sufficient without imposing enterprise style HR overhead.

The realistic standard captures three things:

  1. The data inputs (self assessment, partner reviews, upward feedback)
  2. The developmental priorities agreed in the review conversation
  3. The specific actions and supports the firm committed to

Anything beyond this is over engineered for small firm scale. Anything less leaves the firm without the institutional memory it needs across review cycles.

The most common documentation mistake at small firms is the opposite of over engineering. It is skipping documentation entirely. Reviews are conducted as conversations. No written record is produced. Twelve months later neither party can remember exactly what was agreed.

Establish even a lightweight documentation discipline. A one page summary of each review conversation, signed by both parties, retained centrally. Firms that do this produce materially better continuity across cycles than firms that do not.

Step 7: Connect review outputs to the rest of firm management

The final step is often the missing one. Performance review data at small firms tends to live in isolation. It is collected. It is discussed in the review conversation. It is filed. Then it is disconnected from the rest of how the firm makes management decisions.

Effective small firm review programs connect the data to the operational decisions the data should inform. Work allocation patterns. Partnership track conversations. Compensation discussions. Lateral hiring priorities.

The connection should not be mechanical. Review data should not flow directly into bonus calculations. That pattern damages review data quality, as discussed in Why Culture Beats Pay for Retaining Mid Level Associates.

But review data should inform the leadership conversations that shape compensation, partnership, and work allocation decisions. Firms that maintain the connection (using review data to inform but not determine the strategic decisions) produce review programs that participants take seriously over multiple cycles. Firms that disconnect review data from strategic decisions produce review programs that participants gradually disengage from.

Common small firm review pitfalls and how to avoid them

Five specific failure patterns recur across small US law firms. Each is preventable with deliberate design choices.

The single-rater review. A review based entirely on the supervising partner's input is single-source data. It reflects that partner's specific view of the associate, not the associate's actual performance. The fix is multi-partner input even at small firms. Three to four supervising partners per associate per year, even if the firm has only seven partners total.

The rating drift problem. Over three to five cycles, all ratings compress upward as partners avoid difficult conversations. The fix is calibration sessions that surface inconsistent rating practices and reset the firm-wide application of the rating scale.

The diplomatic upward feedback trap. Internally administered upward reviews at small firms produce uniformly positive feedback. Associates correctly infer that their input is structurally accessible to firm leadership. The fix is independent third-party administration with minimum respondent thresholds before any individual-level reporting.

The skipped-conversation review. A completed review document with no real conversation produces a process that participants describe as bureaucratic. The fix is treating the review conversation as the deliverable and the document as the input.

The disconnected review program. Review data that does not inform any subsequent firm decision becomes a process participants disengage from. The fix is connecting review outputs to the management decisions the data should inform, without mechanically translating ratings into compensation outcomes.

For more on small firm review failure patterns, see 5 Performance Review Mistakes Small US Law Firms Keep Making and Fair Performance Reviews for Small Law Firms: How to Design a System That Works.

Frequently asked questions

How often should a small US law firm run performance reviews?

Three cadences work well at small firms. Annual full reviews, with no formal mid-year process, suit firms at 5 to 15 attorneys with stable practices and limited administrative capacity. Semi-annual reviews (a lighter summer check-in plus a fuller annual review) are the most common pattern at firms with 15 to 50 attorneys. Continuous feedback supplemented by quarterly formal documentation provides the richest data but requires sustained administrative capacity that most small firms can only access through a managed service. The right cadence is the one the firm can actually sustain over three review cycles. Ambitious cadences abandoned after one cycle produce worse outcomes than modest cadences sustained reliably.

Who should conduct performance reviews at a small US law firm?

Reviews should be conducted by the supervising partner with the most consistent visibility into the attorney's work over the review period, supplemented by structured input from at least two additional partners who have worked with the attorney during the cycle. At firms below 15 attorneys, the managing partner often participates in every associate review as a calibration mechanism. At firms above 15, calibration happens through structured partner group sessions rather than through one partner participating in every review. Independent third-party administration of upward feedback and engagement surveys runs in parallel. The data is collected externally and reported back to firm leadership in aggregated form.

Can a 10 attorney firm run upward reviews with genuine anonymity?

Yes, but only with independent third-party administration and minimum respondent thresholds. At firm sizes where associates can identify each other's response patterns, anonymity cannot be produced by feature toggles inside platforms whose data lives in firm systems. Associates correctly infer that the data is structurally accessible to firm leadership and respond diplomatically. The architectural fix is data collection by an independent administrator, with raw responses held externally and minimum respondent thresholds (typically four to five) before any individual partner-level reporting. With these elements in place, even a 10-attorney firm can produce honest upward feedback. Without them, the feedback will be diplomatic regardless of the platform's anonymity claims.

How long should a performance review conversation last at a small law firm?

Effective review conversations run 45 to 60 minutes. The structure is: review of the data inputs (10 to 15 minutes), discussion of two or three specific developmental priorities for the next cycle (15 to 20 minutes), agreement on concrete actions and supports (10 to 15 minutes), and explicit acknowledgment of strengths (5 to 10 minutes). Conversations under 30 minutes typically do not allow space for the developmental discussion to produce specific commitments. Conversations over 90 minutes typically lose focus and produce diffuse rather than actionable outputs.

Should small US law firms use software for performance reviews or run them on their own?

The choice depends on whether the firm has the internal administrative capacity to run the cycle reliably. Self-service performance management software requires roughly 30 to 50 hours of internal administration per cycle. At a firm with no dedicated HR resource, that capacity comes out of someone whose primary job is something else. The typical pattern is that the cycle gets compressed or skipped after the first year. Fully managed services like SRA's absorb the administrative work entirely, which is operationally significant at small firm scale. Generic HR platforms can work at small firms with dedicated administrative capacity but reliably underperform in firms without it. For more, see Best Performance Management Software for Small US Law Firms in 2026.

What is the cost of not running structured performance reviews at a small US law firm?

The 2025 BigHand Navigating the Million Dollar Problem report places the cost of losing a single mid-level associate at a small US firm at approximately $750,000. This figure covers recruiting, training, productivity ramp time for the replacement, lost institutional knowledge, and the matter-specific costs of mid-stream associate transitions. At a 25-attorney firm, two such departures represent 8 percent of the firm's lawyer headcount and a meaningful operational disruption. Structured performance reviews are one of the few firm management interventions that address the structural drivers of associate attrition. Career path opacity. Feedback quality. Work allocation fairness. Partnership visibility. The proportional economic case for getting reviews right at small firms is stronger than at AmLaw firms, not weaker.

Sources

  1. NALP Foundation (2024). Update on Associate Attrition and Hiring, CY 2024. 119 US and Canadian firms. https://www.nalpfoundation.org
  2. BigHand (2025). Navigating the Million Dollar Problem: Resourcing for Profitability, Client and Talent Retention. 800+ law firm leaders. https://www.bighand.com
  3. Thomson Reuters Institute and Georgetown Law (2026). 2026 Report on the State of the US Legal Market. https://www.thomsonreuters.com
  4. American Bar Association (2025). Legal Technology Survey Report. https://www.americanbar.org
  5. BCG Attorney Search (2026). 2026 Legal Talent Movement Report. https://www.bcgsearch.com

Related reading on srahq.com:
Fair Performance Reviews for Small Law Firms: How to Design a System That Works
5 Performance Review Mistakes Small US Law Firms Keep Making (And How to Fix Them)
Confidential Upward Feedback in Small Law Firms: How to Build Trust That Sticks
How Better Feedback Systems Prevent Burnout in Small Law Firms
Attorney Performance Review: A Complete Law Firm Guide (2026)
What Questions Should a Law Firm Ask in an Upward Review?

Is your small US law firm running performance reviews informally? Or trying to force a process designed for larger firms into an operational reality your firm cannot actually sustain?

SRA's performance review programs are designed and administered exclusively for law firms in the United States. They serve firms across the AmLaw 200 and regional US markets, including small firms. Independently administered. Structurally anonymous. Fully managed, without the 40 to 60 hours of internal staff time per cycle that self-run programs require.

Upward Reviews | 360 Degree Feedback | Firm Engagement Surveys | Exit Surveys | Self Assessments | Schedule a Small Firm Consultation

Exclusively serving law firms in the United States since 1987.
