What precautions should lawyers take when using AI tools for legal research?

When lawyers integrate AI into legal research, they’re not just adopting new software—they’re navigating a shifting landscape of professional responsibility where traditional ethical duties intersect with emerging technological risks. The key is recognizing that AI is a research assistant, not a substitute for legal judgment.

Here is a comprehensive framework of precautions organized by core ethical obligations and practical risks:

1. The Duty of Competence: Verify Everything

Under Model Rule 1.1 (and its state equivalents), lawyers must provide competent representation, which now includes understanding the risks and limitations of the technology they use.

Critical precautions:

  • Hallucination checks: AI models can generate convincing but completely fabricated cases, statutes, and quotes. The infamous Mata v. Avianca case (2023) resulted in sanctions when lawyers submitted ChatGPT-generated briefs citing non-existent precedents.
  • Shepardizing/KeyCiting: Always verify that cited cases are real, good law, and actually say what the AI claims they say. AI may misrepresent holdings or mix up facts from different cases.
  • Jurisdictional accuracy: Ensure the AI isn’t citing foreign law, outdated statutes, or federal standards when state law applies (or vice versa).

The rule: Treat AI output as a rough draft requiring independent verification, not a finished product.
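Some firms make this verification step systematic by running a "citation triage" pass over AI output before anyone treats it as research. The sketch below is purely illustrative: the regex, function names, and sample text are assumptions, and the pattern is nowhere near a complete citation parser. Its only job is to turn AI-generated text into a checklist of authorities a human must still confirm in Westlaw or Lexis.

```python
import re

# Rough pattern for "Party v. Party, Volume Reporter Page" style citations.
# Illustrative only; real citation formats are far more varied than this.
CITATION_PATTERN = re.compile(
    r"[A-Z][A-Za-z.'\- ]+ v\. [A-Z][A-Za-z.,'\- ]+\s+\d+\s+[A-Z][A-Za-z. ]*\d*d?\s+\d+"
)

def extract_citations(ai_output: str) -> list[str]:
    """Pull citation-like strings out of AI-generated research for manual review."""
    return [m.group(0).strip() for m in CITATION_PATTERN.finditer(ai_output)]

if __name__ == "__main__":
    # Fictitious sample text standing in for an AI-drafted research memo.
    draft = "As held in Smith v. Jones, 123 F. Supp. 2d 456, the duty attaches at filing."
    for citation in extract_citations(draft):
        # Each flagged citation still needs a human to confirm three things:
        # the case exists, it is still good law, and it supports the stated point.
        print(f"VERIFY: {citation}")
```

The point of a pass like this is not automation of judgment; it is making sure no citation reaches a filing without a named person having checked it.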

2. Confidentiality and Data Security (Rule 1.6)

Most AI tools operate on third-party servers, creating potential waivers of attorney-client privilege or work product protection.

Key safeguards:

  • Input restrictions: Never input client names, specific dates of birth, financial figures, trade secrets, or detailed case facts into public AI tools (like ChatGPT, Claude, or Gemini) unless the vendor explicitly guarantees:
    • Zero data retention for training purposes
    • SOC 2 Type II compliance
    • Contractual commitments to notify the firm of, and challenge, third-party subpoenas or data demands
  • Ethical walls: If using AI within a firm, ensure queries don’t train models that could surface confidential information to other users (including competitors).
  • Retainer clarity: Some jurisdictions require client consent before inputting confidential information into AI systems. Even when not required, transparency prevents malpractice claims.
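One low-tech way to enforce the input restrictions described above is to run every prompt through a scrubbing step before it leaves the firm. The sketch below is a minimal illustration under assumed names and patterns, not a real anonymization tool; a regex pass will never catch every identifier, so it supplements rather than replaces human review of what goes into a public model.

```python
import re

# Hypothetical matter-specific substitutions maintained by the team;
# the names here are invented for illustration.
CLIENT_TERMS = {
    "Acme Widgets LLC": "[CLIENT]",
    "Jane Roe": "[OPPOSING PARTY]",
}

# Crude patterns for dates and dollar figures; illustrative only.
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")
MONEY_PATTERN = re.compile(r"\$[\d,]+(?:\.\d{2})?")

def scrub_prompt(prompt: str) -> str:
    """Replace known client identifiers and obvious specifics before submission."""
    for term, placeholder in CLIENT_TERMS.items():
        prompt = prompt.replace(term, placeholder)
    prompt = DATE_PATTERN.sub("[DATE]", prompt)
    prompt = MONEY_PATTERN.sub("[AMOUNT]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Acme Widgets LLC paid Jane Roe $125,000.00 on 04/02/2019. Is the release enforceable?"
    print(scrub_prompt(raw))
    # -> "[CLIENT] paid [OPPOSING PARTY] [AMOUNT] on [DATE]. Is the release enforceable?"
```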

3. Supervision and Responsibility (Rules 5.1 & 5.3)

Lawyers cannot delegate judgment to algorithms.

Practical requirements:

  • Review protocols: Establish firm-wide guidelines requiring senior attorneys to review AI-generated research before filing with courts or sending to opposing counsel.
  • Training: Associates and staff need training on what AI can and cannot do—particularly the difference between a case summary and legal analysis.
  • Documentation: Maintain records of what AI tools were used and how outputs were verified (useful if questions arise later about due diligence).
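Documentation does not need to be elaborate. Even a structured log entry per research task, captured at the time, goes a long way if due diligence is later questioned. The record format below is a made-up example of what such an entry might track, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class AIResearchLog:
    """One record per AI-assisted research task (illustrative fields only)."""
    matter_id: str
    tool: str                 # which AI tool was used
    query_summary: str        # what was asked, without confidential detail
    reviewed_by: str          # attorney responsible for verification
    citations_verified: bool  # were all cited authorities checked in Westlaw/Lexis?
    logged_on: str = field(default_factory=lambda: date.today().isoformat())

entry = AIResearchLog(
    matter_id="2024-0137",    # fictitious matter number
    tool="Enterprise research assistant",
    query_summary="Statute of limitations for breach of fiduciary duty claims",
    reviewed_by="Senior associate",
    citations_verified=True,
)

# Append to a simple JSON-lines file the firm can produce later if asked.
with open("ai_research_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```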

4. Bias and Fairness Considerations

AI models trained on historical legal data may perpetuate systemic biases in sentencing guidelines, creditor remedies, or employment discrimination standards.

Mitigation strategies:

  • Source skepticism: Question whether the training data over-represents certain jurisdictions, practice areas, or demographics.
  • Counter-research: Deliberately search for counter-authority that AI might have deprioritized in its algorithmic ranking.
  • Contextual analysis: Ensure AI isn’t masking evolving standards (e.g., changes in reasonable suspicion standards or environmental regulatory interpretations).

5. Transparency and Candor (Rules 3.3 & 4.1)

Courts increasingly require disclosure of AI use in filings.

Compliance checklist:

  • Local rules check: Some federal judges (notably in the Northern District of Texas) have issued standing orders requiring certifications that disclose whether generative AI was used in drafting and confirm that every citation was verified by a human.
  • Candor to tribunal: If asked directly by a judge about research methods, lawyers must be truthful about AI assistance.
  • Client communication: Be candid with clients about where AI is reducing billable hours and where the matter still requires human expertise and judgment.

6. Practical Implementation Safeguards

| Risk area | Preventive measure |
| --- | --- |
| Hallucinated citations | Mandate "human in the loop" verification against Westlaw/Lexis |
| Data leakage | Use enterprise AI platforms (e.g., Harvey, CoCounsel) with BAAs, not consumer tools |
| Over-reliance | Limit AI to "first pass" research; require traditional Shepardizing |
| Scope creep | Define specific, narrow queries rather than open-ended legal questions |
| E-discovery | Ensure AI privilege detection tools are trained on the specific matter's privilege log |

7. Malpractice Insurance Considerations

Some carriers now exclude AI-related errors or require specific protocols for coverage to apply. Review your policy’s "technology exclusion" clauses and notify your carrier of AI adoption.


Where would you like to go deeper? For example:

  • Are you specifically concerned about litigation research (where hallucinated citations can result in sanctions) versus transactional due diligence (where missing a critical clause poses different risks)?
  • Are you evaluating enterprise legal AI platforms (like Casetext’s CoCounsel or Harvey) versus consumer tools like ChatGPT?
  • Does your jurisdiction have specific AI disclosure rules I could unpack for you?

The regulatory landscape is evolving rapidly—staying current with your state bar’s ethics opinions (several states, including California, Florida, and New Jersey, have issued specific AI guidance) is as important as mastering the technology itself.