The conventional concern about AI at US BigLaw firms is that it will replace lawyers. The more immediate problem — the one that is already affecting first-year development in 2026 — is that AI has eliminated the entry-level work that taught first-years how to think like lawyers. Document review, contract redlining, research memos, discovery organisation: these tasks were never primarily valuable as outputs. They were valuable as training. The first-year associate who spent 80 hours reviewing discovery documents was not just completing a task — they were building the pattern recognition, issue-spotting instincts, and quality-control habits that made them useful to partners two years later.
Thomson Reuters Institute’s 2025 research found that 81% of large US law firms are actively experimenting with generative AI tools, and 43% have already deployed them in live client matters. The work that used to take a first-year associate 40 hours now takes an AI tool 40 minutes. The efficiency gain is real. The development gap is also real: US BigLaw firms are applying the performance evaluation frameworks they built when first-years learned through volume repetition to first-years who now learn through AI oversight and judgment calls. The frameworks were not built for this. Most still measure speed and output volume in a practice environment where neither metric reflects first-year learning anymore.
This is the specific problem that US BigLaw performance management needs to solve in 2026: not how to train first-years on AI tools, but how to evaluate and develop the human capabilities that remain after AI handles the routine work — and how to do that for a Gen Z cohort that already expects AI as part of their working environment and will leave firms that treat it as a threat rather than an infrastructure upgrade.
The Development Gap AI Has Created at US BigLaw
Understanding the performance management problem requires understanding what first-year development at US BigLaw used to look like and what it looks like now.
The evaluation mismatch: Most US BigLaw performance evaluation frameworks were designed for the old model of first-year work. They measure output volume, hours billed on specific task types, and technical accuracy in routine work. None of these metrics meaningfully evaluates the capabilities AI-augmented practice now demands: AI output assessment, margin judgment calls, error pattern recognition, and context-sensitive override decisions. The first-year associate who excels at these new capabilities will earn mediocre scores in a framework still measuring the old ones.
What Gen Z Associates Bring to This Environment and What They Need From It
Gen Z associates at US BigLaw entered the profession expecting AI as part of their working environment. Bloomberg Law’s 2025 Future of Work Survey found that 61% of associates expect AI to handle at least a quarter of their routine work within two years. For Gen Z first-years, this is not an adjustment — it is the baseline expectation. The challenge is not that they resist AI adoption; it is that the development infrastructure at most US BigLaw firms has not kept pace with the speed of their AI integration.
The 70% who said they receive little or no training on how to collaborate effectively with AI tools (Bloomberg Law, 2025) are not describing a technology problem. They are describing a performance management problem: the firm has not defined what good AI collaboration looks like, has not built evaluation criteria that measure it, and has not created the feedback loops that would help a first-year understand whether their AI oversight judgments are developing appropriately. To Gen Z associates, this missing infrastructure reads the same way as missing feedback in any other dimension: the firm is not paying attention to their development.
The Gen Z compounding factor: The 60% of US law firm associates who say their firm is not actively trying to retain them (MLA, 2024) are disproportionately the first-years and second-years navigating AI-augmented practice without updated evaluation frameworks. These associates have no way to assess whether they are developing the right capabilities because the firm has not defined what the right capabilities are in a post-AI first-year environment. The departure decision forms in that uncertainty.
What US BigLaw Entry-Level Performance Evaluation Needs to Measure in 2026
Redesigning entry-level performance evaluation for AI-augmented BigLaw practice requires identifying the specific capabilities that separate strong from weak first-year performance in a practice environment where AI handles routine volume. These are not speculative future competencies — they are the capabilities that distinguish high-performing from average-performing first-years at US BigLaw firms today:
- AI output assessment accuracy: identifying errors and gaps in AI-generated analysis before partner submission
- Context-sensitive judgment: applying firm-specific risk tolerance to AI suggestions
- Productive AI prompting: structuring queries that produce precise and useful outputs
- Error pattern recognition: identifying when AI is systematically wrong about a specific issue type
- Critical independent verification: knowing when to check AI citations against primary sources
- Structured escalation: recognising which AI output issues need partner input
The Judgment Gap Is the Performance Management Gap
What all six capabilities above share is that they require judgment rather than speed or volume. Judgment — the ability to assess quality, apply context, recognise error patterns, and escalate appropriately — is precisely the capability that traditional first-year development built through volume repetition, and precisely the capability that most US BigLaw performance evaluation frameworks do not currently measure directly. The performance management adaptation that AI-augmented practice requires is not adding new metrics to existing frameworks. It is replacing volume and output metrics with judgment and assessment metrics — which requires redesigning the competency rubrics that the evaluation system uses to rate first-year performance.
💡 Key Insight: The US BigLaw firms that are managing the AI transition in first-year development most effectively are not the ones with the most sophisticated AI tools. They are the ones that have updated their competency rubrics to measure judgment capabilities explicitly, built matter-completion feedback protocols that capture specific observations about AI oversight quality, and created upward review instruments where first-years can tell partners which aspects of AI collaboration they need more guidance on. The tools are available everywhere. The evaluation infrastructure is not.
SRA designs and administers performance evaluation programs for US law firms navigating AI-driven practice change.
Updated competency frameworks, upward reviews, 360-degree feedback, and firm engagement surveys — fully managed for United States law firms since 1987. No software to deploy.
360-Degree Feedback → srahq.com/services#360 | Upward Reviews → srahq.com/services#upward
Contact SRA → srahq.com/contact
How US BigLaw Firms Are Adapting Performance Management for AI-Augmented Practice
The adaptation is happening along three practical lines at the US BigLaw firms that are furthest ahead on this problem:
1. Updating Competency Rubrics to Include AI-Specific Dimensions
The highest-impact change is revising the first-year and second-year competency frameworks to explicitly include AI-related judgment capabilities alongside the traditional dimensions. This does not mean adding a separate ‘AI skills’ section. It means revising the definitions of existing competency dimensions to reflect how they present in an AI-augmented practice environment. ‘Quality of legal research’ in a post-AI environment includes assessment of AI-generated research output, not just direct research capability. ‘Document review accuracy’ includes exception identification in AI-processed review sets, not just manual review speed. The rubric update is conceptually straightforward but requires a practitioner with law-firm-specific expertise to execute well, because the right update varies by practice area and by the specific AI tools the firm uses.
2. Adding Matter-Completion Feedback on AI Collaboration Quality
Matter-completion feedback — structured partner observations within 48 hours of a significant matter milestone — is the delivery mechanism for the updated competency framework. When partners give matter-completion feedback on a first-year’s AI oversight work, they need specific questions to anchor the observation: did the associate catch the AI-generated error in the risk analysis? Did they apply client context appropriately to the AI clause suggestions? Did they escalate the right issues rather than self-resolving everything? These specific questions produce the developmental observations that ‘needs to improve research quality’ cannot. The matter-completion feedback format is the same across AI and non-AI work; only the specific observation prompts need updating.
3. Using Upward Reviews to Surface AI Development Needs
The upward feedback direction is also relevant to AI development at US BigLaw: first-year associates are in the best position to identify which aspects of AI-augmented practice they are navigating without adequate guidance from supervising partners. Partners who have not updated their mental model of first-year development may be providing feedback calibrated to traditional performance expectations while their associates are being evaluated on AI oversight quality they have never been explicitly taught. SRA’s upward review program can include dimensions specifically designed to surface AI development guidance gaps: whether partners are providing useful feedback on AI collaboration quality, whether the first-year understands what good AI oversight looks like from the partner’s perspective, and whether the escalation criteria for AI output issues have been made explicit. This turns the upward review from a measure of supervision quality alone into a development intelligence instrument for the AI transition.
The Retention Connection
The AI development gap at US BigLaw has a direct retention consequence that is visible in the data. Gen Z associates at American firms who cannot assess whether they are developing the right capabilities — because the firm has not defined them, measured them, or provided feedback on them — experience the same absence that produces the 60% ‘firm not trying to retain me’ perception (MLA, 2024). The specific mechanism: Gen Z associates who joined BigLaw expecting AI as part of their working environment and who are not receiving structured development guidance on how to be excellent at AI-augmented legal work are concluding that the firm’s development infrastructure is behind their professional reality. The 22% year-over-year increase in departures from AmLaw 100 firms to smaller, culture-focused practices (Bloomberg Law, 2024) includes a meaningful share of associates who found that smaller firms were building AI-ready practice environments faster than their BigLaw employers.
The practical implication for US BigLaw PD Directors: The AI development gap is not primarily an IT problem or a training curriculum problem. It is a performance management problem. US BigLaw firms that update their competency rubrics, add matter-completion feedback on AI collaboration quality, and use upward reviews to surface AI development guidance gaps are addressing all three dimensions simultaneously. Firms that treat AI development as a technology adoption issue rather than a performance management issue will continue to lose Gen Z associates who perceive their development as unmanaged in exactly the environment they came to BigLaw to develop in.
Frequently Asked Questions: AI, Gen Z, and US BigLaw Performance Management
1. How is AI changing entry-level work at US BigLaw firms in 2026?
AI tools at US BigLaw firms — Harvey, Casetext, Litera’s AI suite, and firm-specific deployments — have automated the majority of the document review, contract analysis, research summarisation, and discovery organisation that traditionally occupied first-year associate time. Thomson Reuters Institute’s 2025 research found 81% of large US law firms are actively experimenting with generative AI tools, and 43% have deployed them in live client matters. The practical effect on first-year work is that volume-based tasks that previously took 40 hours now take AI tools 40 minutes. First-years are increasingly assigned to review AI output, flag errors and gaps, apply client context to AI-generated analysis, and make judgment calls at the margin of AI capability. This is a fundamentally different set of capabilities from what traditional first-year development built through high-volume repetition, and most US BigLaw performance evaluation frameworks have not been updated to reflect the shift.
2. Why does AI automation create a performance management problem at US BigLaw?
AI automation creates a performance management problem at US BigLaw because the volume-based work that AI has eliminated was also the primary mechanism through which first-years developed the pattern recognition, issue-spotting instincts, and quality-control habits that made them useful to partners by year two or three. When that work moves to AI, first-years are no longer building those capabilities through repetition — they are building them through AI oversight, judgment calls, and exception identification. But most US BigLaw evaluation frameworks are still measuring the capabilities built through volume repetition: research accuracy, document review speed, output quantity. A first-year who is excellent at AI oversight and margin judgment but produces fewer hours of raw output in traditional task categories will underperform in a framework still calibrated to the traditional path. The evaluation system needs to be updated to measure the capabilities that matter in AI-augmented practice, or it will systematically misevaluate the best-adapted first-years.
3. What competencies should US BigLaw evaluate in AI-augmented first-year associates?
The six capabilities that most directly predict first-year performance in AI-augmented BigLaw practice are: AI output assessment accuracy (identifying errors and gaps in AI-generated analysis before partner submission), context-sensitive judgment (applying firm-specific risk tolerance to AI suggestions), productive AI prompting (structuring queries that produce precise and useful outputs), error pattern recognition (identifying when AI is systematically wrong about a specific issue type), critical independent verification (knowing when to check AI citations against primary sources), and structured escalation (recognising which AI output issues need partner input). These are not speculative future skills — they are the capabilities that distinguish high-performing from average-performing first-years at US BigLaw firms today, in practices where AI tools are already deployed in live client matters. The performance management adaptation is to build these capabilities into the competency rubrics and matter-completion feedback protocols that evaluate first-year performance.
4. How should US BigLaw firms adapt upward reviews for AI-augmented practice?
Upward reviews at US BigLaw firms navigating AI-augmented practice should include dimensions specifically designed to surface AI development guidance gaps from the associate’s perspective. Associates are in the best position to identify which aspects of AI collaboration they are navigating without adequate partner guidance — whether partners are providing useful feedback on AI oversight quality, whether escalation criteria for AI output issues have been made explicit, and whether the associate understands what excellent AI-augmented work looks like from the partner’s perspective. SRA’s upward review program can include AI collaboration guidance quality as a rated dimension, with individual partner reports showing each partner’s scores against firm averages. This turns the upward review into a development intelligence instrument for the AI transition: identifying which partners have updated their mental model of first-year development and which are providing feedback calibrated to traditional performance expectations in an AI-augmented environment.
5. How does the AI transition at US BigLaw connect to Gen Z retention?
The connection is through the development infrastructure gap. Gen Z associates at US BigLaw entered the profession expecting AI as part of their working environment — Bloomberg Law’s 2025 survey found 61% expect AI to handle at least a quarter of their routine work within two years. When those associates join firms that have deployed AI tools but not updated their performance evaluation frameworks to reflect AI-augmented practice, they experience the same development infrastructure gap that drives attrition generally: the firm is not defining what excellent performance looks like in their actual working environment, is not measuring their performance against those criteria, and is not providing feedback that would help them develop the capabilities they need. The 60% of US law firm associates who feel their firm is not actively trying to retain them (MLA, 2024) include a disproportionate share of Gen Z associates in AI-augmented practice environments where the development framework has not kept pace with the work. Updating the performance management infrastructure to reflect AI-augmented practice is a retention strategy as well as a development strategy.
Sources
- Thomson Reuters Institute, “Generative AI in Law Firms,” 2025 — 81% experimentation, 43% live deployment
- Bloomberg Law, “Future of Work Survey,” 2025 — associate AI expectations and training gaps
- Bloomberg Law, “Why Gen Z and Millennial Lawyers Are Leaving BigLaw for Small Firms,” 2024 — 22% YoY departure increase
- Thomson Reuters, “Legal Talent and Career Development Report,” 2024 — feedback frequency and retention
- Major, Lindsey & Africa (MLA), “Associate Survey on Retention,” 2024 — 60% of associates say their firm is not actively trying to retain them
- BigHand, “Law Firm Leaders Survey,” 2025 — 800+ US law firm respondents
Related Reading
- How US BigLaw Firms Can Retain Gen Z Associates in 2026 — What the Data Shows
- What Gen Z Associates at US Law Firms Actually Want in 2026 (With Data)
- Why US Law Firm Leaders Need Upward Reviews in 2026 — The Data Case
- 360-Degree Feedback for US Law Firms: What It Is, How It Works, and When to Use It
- Late Feedback at US Law Firms: The Hidden Cost and How to Fix It in 2026
Is your US BigLaw firm’s performance evaluation framework measuring what first-years actually do in 2026 — or what they did in 2019?
SRA designs and administers performance evaluation programs for United States law firms navigating AI-driven practice change. Updated competency frameworks, upward reviews, 360-degree feedback, and engagement surveys — fully managed. No software to deploy.
360-Degree Feedback → srahq.com/services#360 | Upward Reviews → srahq.com/services#upward
Firm Engagement Survey → srahq.com/services#firm | Contact SRA → srahq.com/contact
Exclusively serving United States law firms since 1987.


