Legal
OpenAI reported that GPT-4 scored around the 90th percentile on the Uniform Bar Exam. Harvey AI has raised over $100M. CoCounsel is now a Thomson Reuters product. The legal AI market is not experimental anymore — it's consolidating, and the firms that locked in infrastructure early are pulling ahead on turnaround time and margin. The open question isn't whether AI belongs in legal workflows. It's whether your firm controls the implementation or inherits someone else's architecture decisions.
Legal is the industry where AI hallucination carries the harshest professional consequences. The Mata v. Avianca sanctions proved that AI-fabricated citations in federal court filings are not a theoretical risk — they have happened, attorneys were sanctioned, and it made the news. This is not an argument against AI in legal. It is an argument for engineering it with citation grounding and human review checkpoints that match the severity of the failure mode.
What AI Is Actually Changing
The near-term transformation is in research and document-intensive work. Harvey AI, CoCounsel, and the LexisNexis AI stack can surface relevant cases, identify conflicts, and draft research memos at a speed no human researcher can match — when grounded in real legal databases. Document review for eDiscovery can be triaged by AI in a fraction of the time required for manual review. Contract review and due diligence are following the same pattern in transactional practice, with Spellbook and Ironclad driving adoption in mid-market firms.
What is not changing quickly is the work that requires judgment, strategy, and client relationships. Depositions, trials, negotiations, and complex regulatory strategy are human work. The AI is handling the substrate — the research, the document analysis, the drafting of routine instruments — so that lawyers can spend more time on the work that actually requires a lawyer.
The Confidentiality Architecture Problem
Rule 1.6 creates data architecture requirements that most SaaS legal tools do not satisfy. Client confidentiality is not just a policy requirement — it is a legal doctrine with privilege implications. If client data from Matter A is accessible when processing Matter B, that is not just a data governance problem; it is potential privilege waiver exposure that can affect the client's legal position.
- Client data must be isolated at the storage layer — application-level access controls are not sufficient
- AI inference must not leak information across matter boundaries — embeddings and vector indices need per-matter isolation
- Audit logs must capture every AI interaction with client data — who, what, when, for which matter
- Third-party AI tool vendors require informed client consent under most bar interpretations — the consent workflow must be built into the onboarding process
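The isolation requirement above can be sketched as a storage-layer property rather than an access-control policy. This is a minimal, illustrative sketch — the class and method names (`MatterScopedStore`, `add_document`, `search`) and the in-memory store are assumptions, not a real product API; a production system would enforce the same boundary with per-matter databases, row-level security, or per-matter vector index namespaces.

```python
from dataclasses import dataclass, field


class MatterIsolationError(Exception):
    """Raised when a query would cross a matter boundary."""


@dataclass
class MatterScopedStore:
    """Illustrative store where every read and write is keyed by matter_id.

    The point of the design: there is no API that searches across matters,
    so cross-matter leakage is impossible by construction rather than
    prevented by application-level checks.
    """
    _docs: dict = field(default_factory=dict)  # matter_id -> list of documents

    def add_document(self, matter_id: str, doc: str) -> None:
        self._docs.setdefault(matter_id, []).append(doc)

    def search(self, matter_id: str, query: str) -> list[str]:
        # Every read is scoped to exactly one matter; an unknown matter
        # is an error, never a fallback to a shared index.
        if matter_id not in self._docs:
            raise MatterIsolationError(f"unknown matter: {matter_id}")
        return [d for d in self._docs[matter_id] if query.lower() in d.lower()]
```

The same principle applies to embeddings: a vector index shared across matters reintroduces the leak at the retrieval layer even when document storage is isolated.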
The Supervision Engineering Problem
Rule 5.3 requires lawyers to supervise the work of non-lawyer assistants. Courts and bar associations are extending this duty to AI systems. The ABA's Formal Opinion 512 is explicit: AI outputs in client-facing work require substantive review. "Defensible" means the review is logged, the reviewer is identified, and the review was substantive rather than rubber-stamp.
Designing Compliant Legal AI Workflows
- Ground every research output in verifiable primary sources: RAG pipelines against Westlaw or LexisNexis APIs, not general-purpose web search.
- Surface uncertainty explicitly: if the model's confidence in a legal conclusion is low, the UI must make that visible to the reviewing attorney, not hide it.
- Build structured review stages into the workflow: the AI drafts the memo; an attorney reviews and approves before it goes anywhere, and the approval is logged with attorney identity and timestamp.
- Write every AI interaction with client data to an append-only log. This is not optional — it is the audit evidence for Rule 5.3 supervision documentation.
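The review-gate and logging requirements above can be combined into one small sketch. Everything here is illustrative and assumed — the class name, the 0.9 threshold, and the status labels are made up for the example — but it shows the two properties that matter: every draft passes through a logged human approval step, and the log is append-only, with each entry hashed against the previous one so after-the-fact tampering is detectable.

```python
import hashlib
import json
import time


class ReviewQueue:
    """Hypothetical review-gated workflow: AI drafts enter a queue,
    low-confidence drafts are flagged, and every event is appended to a
    hash-chained audit log (who, what, when, for which matter)."""

    def __init__(self, confidence_threshold: float = 0.9):
        self.threshold = confidence_threshold
        self.audit_log: list[str] = []  # append-only; entries are never mutated

    def _log(self, event: dict) -> None:
        event["ts"] = time.time()
        # Chain each entry to the previous one so tampering is detectable.
        prev = (hashlib.sha256(self.audit_log[-1].encode()).hexdigest()
                if self.audit_log else "genesis")
        event["prev_hash"] = prev
        self.audit_log.append(json.dumps(event, sort_keys=True))

    def submit_draft(self, matter_id: str, draft: str, confidence: float) -> str:
        # Every draft requires review; low confidence is surfaced, not hidden.
        status = ("needs_close_review" if confidence < self.threshold
                  else "ready_for_review")
        self._log({"action": "ai_draft", "matter": matter_id,
                   "confidence": confidence, "status": status})
        return status

    def approve(self, matter_id: str, attorney_id: str) -> None:
        # The approval itself is the Rule 5.3 supervision evidence.
        self._log({"action": "attorney_approval", "matter": matter_id,
                   "reviewer": attorney_id})
```

In production the log would live in write-once storage, but the shape of the evidence — reviewer identity, timestamp, matter, and an integrity chain — is the same.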
- 01: Rule 1.6 confidentiality requires client data isolation at the data layer — most multi-tenant SaaS legal tools commingle data across matters by design, creating privilege waiver exposure that general counsel and outside clients are increasingly flagging in vendor assessments
- 02: Rule 5.3 supervision duty cannot be delegated to software — every AI output touching a legal work product needs a defensible, documented human review stage, which means AI tools that skip this create compliance exposure, not just quality risk
- 03: Citation hallucination is a documented production failure, not a theoretical risk — the Mata v. Avianca sanctions established that submitting AI-fabricated case citations has personal consequences for attorneys, which means RAG pipelines grounded against Westlaw or LexisNexis are now a minimum bar for legal research tooling
- 04: The billable hour creates structural resistance to AI adoption — firms where associates track six-minute increments are actively disincentivized to deploy tools that compress 30 hours of research into 30 minutes, so technology adoption decisions depend on billing model decisions that most firms haven't made yet
- 05: Bar association AI ethics opinions are jurisdiction-specific and still evolving — a multi-state firm running the same AI tool across practice areas may be compliant in one jurisdiction and in violation in another after the next state bar formal opinion drops
- 06: Privilege review in large document sets requires accuracy that general-purpose LLMs don't consistently achieve — false negatives (privileged documents included in production) are catastrophic, and fine-tuned models built specifically for eDiscovery exist for exactly this reason
- We treat Rule 1.6 as an architecture requirement, not a policy checkbox — client data isolation is enforced at the database and API layer, with per-matter tenant boundaries that hold up under vendor security assessments, not just terms of service
- Our legal AI implementations include citation verification against real legal databases (Westlaw, LexisNexis) with confidence scoring and structured review queues — we don't ship legal research tools that output raw LLM text as a work product
- We build against the platforms law firms actually use: Clio for practice management, Relativity for eDiscovery, iManage for document management — integration work, not workflow replacement, so adoption doesn't require retraining a practice that's been running the same system for a decade
- We scope billing model implications before writing a line of code — a legal AI tool deployed in a firm that hasn't updated its pricing model creates internal political resistance that kills adoption regardless of how good the technology is
- We monitor ABA formal opinions and state bar guidance as part of ongoing delivery — not a one-time compliance check at launch, because what's permissible today may not be permissible after the next formal opinion
01. Hallucination Is the Only Bug That Ends Careers
Mata v. Avianca is the reference case: attorneys submitted a brief citing six cases that didn't exist, all generated by ChatGPT. The court sanctioned them personally. The engineering response is citation grounding — AI research outputs linked to verifiable sources in Westlaw or LexisNexis, confidence-scored, and reviewed before anything goes into a filing. RAG pipelines with legal database grounding are now table stakes for any serious legal research product. General-purpose LLMs used directly for case research are a malpractice risk, not a productivity tool.
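The grounding step described above can be sketched as a verification pass over a draft. The real check would call a Westlaw or LexisNexis API; those are proprietary, so this sketch substitutes a local set of known citations and a deliberately simplified regex (real citation parsing, e.g. with an eyecite-style library, is far more involved). The Mata citation is real; the Varghese citation is one of the fabricated cases from that brief.

```python
import re

# Stand-in for a real legal-database lookup. In production this set would
# be replaced by an API call against Westlaw or LexisNexis.
KNOWN_CITATIONS = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
}

# Simplified pattern for reporter-style case citations; illustrative only.
CITATION_RE = re.compile(r"[A-Z][A-Za-z.]+ v\. .+?, \d+ [^(]+\([^)]*\)")


def verify_citations(memo: str) -> dict:
    """Extract case citations from a draft and split them into verified
    vs. unverifiable. Anything unverifiable must be resolved by the
    reviewing attorney before the draft goes anywhere near a filing."""
    found = CITATION_RE.findall(memo)
    return {
        "verified": [c for c in found if c in KNOWN_CITATIONS],
        "unverified": [c for c in found if c not in KNOWN_CITATIONS],
    }
```

The design point is the failure mode: an unmatched citation blocks the workflow and routes to a human, rather than passing through as plausible-looking text.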
02. The Billable Hour Problem Is a Pricing Problem
Firms reporting widespread AI adoption are finding that associates can't hit billable targets because the work completes faster. Clients who have read about Harvey AI are asking why a research memo that took 30 associate hours last year is still billed at 30 hours. The firms moving fastest on AI adoption have shifted to flat fees, subscriptions, or value-based pricing — models that don't penalize efficiency. Legal tech builders who ignore this dynamic ship tools with the right features that die in procurement, because the firm's incentive structure hasn't been addressed.
03. eDiscovery Is Where the Economics Are Clearest
A review set of one million documents that previously required weeks of contract reviewer time can be triaged in hours, with humans reviewing only the uncertain and flagged documents. The engineering challenge is privilege review accuracy — false positives are expensive, false negatives (privileged documents inadvertently produced) are catastrophic and can't be undone. DISCO, Relativity AI, and the purpose-built eDiscovery platforms have fine-tuned models for this task. Deploying a general-purpose LLM for privilege review without domain-specific fine-tuning will not hit the accuracy bar that production document review requires.
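The asymmetry described above — expensive false positives, catastrophic false negatives — translates directly into a routing policy. This is a hypothetical sketch: the threshold values and status labels are illustrative, not benchmarked against any real eDiscovery platform. The asymmetric bands encode the point that a document is only auto-cleared for production when the model is very confident it is *not* privileged.

```python
def route_document(privilege_score: float,
                   release_threshold: float = 0.02,
                   withhold_threshold: float = 0.98) -> str:
    """Triage a document by its model-estimated probability of privilege.

    The bands are deliberately asymmetric: the cost of producing a
    privileged document (false negative) dwarfs the cost of a human
    double-checking a non-privileged one, so the uncertain middle band
    is wide and always routes to human review.
    """
    if privilege_score >= withhold_threshold:
        return "withhold_pending_privilege_log"   # likely privileged
    if privilege_score <= release_threshold:
        return "auto_clear_for_production"        # confidently not privileged
    return "human_review"                         # everything uncertain
```

The triage win comes from the volume distribution: in a million-document set, most documents fall cleanly into the outer bands, and reviewer time concentrates on the middle.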
- Harvey AI and CoCounsel (Thomson Reuters) establishing the BigLaw AI infrastructure layer — large firm adoption moving from pilot programs to standard tooling in associate workflows
- Spellbook and Ironclad driving CLM automation in mid-market — routine contract review that previously required paralegal time is being handled by AI with human spot-checks
- Court AI disclosure requirements expanding across federal circuits and state courts — mandatory disclosure is becoming standard practice, which means firms need AI use logging and audit trails as a compliance requirement
- eDiscovery AI moving from document triage into privilege review, with fine-tuned models purpose-built for specific practice areas outperforming general-purpose models on accuracy benchmarks
- Alternative fee arrangements accelerating as AI compresses the time basis of legal work — flat fees and value billing replacing hourly pricing in transactional practices first, with litigation to follow
- Non-lawyer ownership sandbox pilots expanding beyond Utah, Arizona, and Washington — structural changes to law firm ownership models that will reshape how legal AI tools are funded and deployed
- 01: Using general-purpose LLMs for legal research without citation grounding — the Mata v. Avianca sanctions are a documented outcome; outputting raw LLM research text as a work product is a malpractice exposure, not a productivity gain
- 02: Multi-tenant AI implementations that commingle client data across matters — violates Rule 1.6, creates privilege waiver risk, and is the first thing outside counsel and sophisticated clients flag in vendor security reviews
- 03: Automating legal workflows without structured human review stages — Rule 5.3 supervision duty applies to AI-generated work product, and tools that skip a defensible review step don't satisfy it regardless of how accurate the model is
- 04: Launching legal AI without jurisdiction-specific ethics monitoring in place — a tool that's compliant at launch can become non-compliant after the next ABA formal opinion or state bar guidance, and firms need to know before the next filing, not after
- 05: Deploying efficiency tools in firms that haven't updated their pricing models — internal political resistance from partners protecting billable hour revenue kills adoption faster than any technical limitation
ABA Model Rules 1.1, 1.6, 5.3, and 5.4 are the governing framework for legal AI use. Rule 1.1 (competence) now includes a duty to understand generative AI tools relevant to practice — ABA Formal Opinion 512 (2024) addresses competence, confidentiality, supervision, and billing obligations for AI-generated work product. Multiple federal circuits and state courts now require disclosure of AI assistance in filings, and the list is expanding. Washington, Utah, and Arizona have sandbox pilots allowing non-lawyer ownership of law firms, with UPL enforcement pressure increasing as AI enables non-lawyers to perform tasks that previously required attorney involvement.
We start with data architecture: per-matter isolation, no cross-client data paths, audit logs that can withstand a privilege challenge in discovery. AI features are layered on top of that foundation — RAG pipelines grounded against verified legal databases, confidence thresholds that gate outputs into human review queues before they touch any filing or client deliverable. We integrate with Clio, Relativity, and iManage rather than building parallel systems, because the goal is adding AI capability to existing workflows, not replacing the workflows attorneys have spent years building. Every engagement includes a billing model conversation, because the technology is only half the adoption problem.
Ready to build for Legal?
We bring domain expertise, not just engineering hours.
Start a Conversation. Free 30-minute scoping call. No obligation.
