Chapter 8: Ethical and Regulatory Implications of AI in Law

Chapter Overview

In Chapter 7, we explored how generative AI is reshaping the legal profession by streamlining day-to-day work, adjusting law firm business models, and influencing professional roles. We discussed how automation can free attorneys from certain routine tasks, giving them more time for higher-level strategic thinking. We also examined how law firms are managing these changes, often by encouraging new skill development and adopting emerging technologies, while adhering to professional obligations.

In this chapter, we shift our focus to the ethical and regulatory frameworks governing lawyers’ use of generative AI in the United States. You will analyze how existing professional rules, such as those emphasizing competence, confidentiality, and candor, apply to AI-assisted legal practice. We will examine formal ethical opinions from the American Bar Association (ABA) and various state bars, newly minted court rules requiring lawyers to disclose their AI usage, and real-world disciplinary cases imposing sanctions for AI-related missteps. By the end of this chapter, you should have a solid understanding of how to use AI responsibly under today’s ethical codes, and you will be prepared to develop responsible AI-use policies in any legal setting.

Upon successful completion of this chapter, students should be able to:

  1. Identify key ethical obligations relevant to using generative AI in legal practice, including competence, confidentiality, candor to the court, and supervision.
  2. Explain how ethics opinions, court rules, and regulations apply to lawyers’ use of AI, enabling a clear interpretation of professional requirements.
  3. Analyze real-world disciplinary actions and case law involving the misuse of AI, learning how ethical violations occur and how to avoid them.
  4. Evaluate both risks and benefits of integrating AI into legal workflows, with special consideration for accuracy, bias, and client communication.

Let's begin with a thought experiment, but one that's not too far removed from your experience.


How Do Existing Ethical Rules Apply to AI?

Imagine you’re a new associate at a law firm, and you’re up against a tight deadline to draft a legal memo. Someone suggests using a generative AI tool like ChatGPT to produce the first draft. You type in the prompt, wait a few seconds, and get a polished write-up. It seems miraculous: you saved hours of work. But you might pause and ask: Is it ethically acceptable to use this tool? Could you accidentally violate confidentiality or produce an inaccurate document? Could you mislead a court if you submit the AI’s text without checking it?

Over the past two years, legal regulators have wrestled with these questions, publishing guidance on how lawyers can (or cannot) incorporate AI into their everyday practice. The underlying principle is that technology doesn’t change a lawyer’s core ethical duties. Whether you’re typing with pen and paper, using an online research engine, or harnessing a cutting-edge AI, you remain responsible for competent representation, confidentiality of client information, and honesty in communications.

In this chapter, we’ll walk through the primary sources of AI-focused ethical guidance:

  1. Formal ethics opinions from the American Bar Association and state bars, clarifying how existing rules apply to AI.
  2. Court orders that require lawyers to disclose when they use AI or to certify the accuracy of AI-derived information.
  3. Relevant statutes and regulations, such as new state laws broadly governing AI and any federal initiatives that might affect legal practice.

By examining real-world cases where attorneys have been sanctioned for AI misuse, we’ll see that these rules have teeth. We’ll also explore best practices, like verifying all AI-generated citations, to ensure compliance.

Our goal is not to scare you away from AI; in fact, when used appropriately, AI can offer substantial benefits, such as faster research and automated drafting. But it must be wielded with a keen awareness of ethical obligations.


ABA Guidance on Generative AI

Duty of Competence and “Tech Competence”

Since 2012, the ABA’s Model Rule 1.1 (Competence) has included Comment 8, stating that lawyers must keep abreast of “the benefits and risks associated with relevant technology” (ABA Model Rules, 2012). While it never specifically mentions “AI,” this language has been interpreted to require a basic understanding of any technology a lawyer uses in practice.

ABA Formal Opinion 512 (July 2024), titled "Generative Artificial Intelligence Tools," elaborates that competence involves understanding an AI tool’s capabilities and limits (ABA Formal Opinion 512, 2024). This does not mean you must become a data scientist. Rather, you must use the tool intelligently, recognizing, for example, that generative AI can sometimes produce entirely fabricated statements (the so-called “hallucinations”). If you can’t properly evaluate or supervise the AI’s output, it’s safer not to use it, or you must seek additional training.

Key Term
Technological Competence: Refers to a lawyer’s duty under Model Rule 1.1, Comment 8 to stay updated on how technology (including AI) affects legal practice. This includes knowing enough to understand a given tool’s risks (like “hallucinations”) and benefits (like faster research).

Confidentiality and AI

Model Rule 1.6 obligates lawyers to safeguard client confidences. According to ABA Formal Opinion 512, if an AI platform stores or uses any input data to further train its models, that could expose client secrets (ABA Formal Opinion 512, 2024). Before you type confidential information into a public AI tool, you must (1) carefully review the platform’s privacy policy to see if data is retained or visible to others, and (2) obtain client consent if there’s any risk of disclosure.

In practical terms, many law firms now direct attorneys to “anonymize or redact” client details before using AI. Some enterprise AI solutions promise that data remains private and will not be used for any training. Even then, you remain responsible for taking reasonable steps to protect the client’s information.

Practice Pointer
If you must input client-related text into an AI tool, strip out identifying details first. For instance, replace the client name and specifics with placeholders. Then incorporate the tool’s suggestions manually into your final document, ensuring no confidential data travels outside your secure environment.
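
For firms that want to systematize this step, a simple script can apply placeholder substitutions before any text leaves the building. The following Python sketch is purely illustrative: the patterns, names, and placeholder labels are hypothetical assumptions, and real de-identification of client data would require a reviewed, matter-specific process.

```python
import re

# Hypothetical redaction map. A real matter would need a vetted,
# matter-specific list (parties, addresses, account numbers, etc.).
REDACTIONS = {
    r"\bAcme Corp\b\.?": "[CLIENT]",
    r"\bJane Doe\b": "[OPPOSING PARTY]",
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",       # U.S. Social Security numbers
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",   # email addresses
}

def redact(text: str) -> str:
    """Replace identifying details with placeholders before prompting an AI tool."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

draft = "Acme Corp. terminated Jane Doe (jdoe@example.com) on March 3."
print(redact(draft))
# Prints: [CLIENT] terminated [OPPOSING PARTY] ([EMAIL]) on March 3.
```

Even with such a script, a human should review the redacted text before submission; pattern matching will miss indirect identifiers (job titles, unusual facts) that can still reveal who the client is.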

Client Communication and Informed Consent

Under Model Rule 1.4, attorneys must keep clients reasonably informed about significant aspects of the representation; Rule 1.0(e) defines the "informed consent" that certain disclosures require. If using generative AI is a core part of your strategy, particularly if you plan to share confidential data or rely heavily on the AI's results, it may be wise, or even mandatory in some states, to obtain the client's informed consent.

The ABA stops short of requiring universal disclosure each time you use AI for routine tasks (like basic proofreading). However, if the AI use could materially affect your client’s case or reveal sensitive info, it is prudent to let them know. Several state bar opinions (such as in Florida, Pennsylvania, and West Virginia) explicitly encourage or require more robust client disclosure.

Supervision of AI Tools

Model Rule 5.1 requires supervising attorneys to ensure that lawyers under them comply with ethical rules, and Model Rule 5.3 extends this duty to nonlawyer assistants. When it comes to AI, the ABA says to treat these tools as you would a human assistant, meaning you cannot delegate ultimate responsibility. If your firm’s paralegal or junior associate uses an AI tool to draft a brief, you must confirm that all content is accurate and that no client confidences were improperly disclosed.

Example
Scenario: A senior partner instructs a junior associate to “use ChatGPT” to draft a motion. The associate does so but fails to fact-check the resulting citations, which turn out to be bogus. The senior partner is also on the hook for failing to supervise properly and not reviewing the AI-generated content.

Candor and Truthfulness

Lawyers must always be honest with tribunals (Rule 3.3) and with others (Rule 4.1). ABA Formal Opinion 512 emphasizes that attorneys remain fully responsible if an AI tool outputs false statements. There is no “it was the AI’s fault” defense. If you cite a case or factual claim generated by AI, you must confirm it actually exists and is accurate.

The surge of “hallucinated” cases in real court filings (e.g., Mata v. Avianca) underscores how critical verification is. Submitting an AI-created brief with fake citations violates your duty of candor and can lead to severe sanctions.

Billing and Reasonable Fees

Model Rule 1.5 requires legal fees to be "reasonable." If AI dramatically reduces the time needed to complete a task, you cannot bill clients for the hours you would have spent without AI. At the same time, you can bill for the value of your services: expert oversight, strategic guidance, and editing. The ABA notes you may also need to disclose to clients any cost associated with using a paid AI platform. Transparency helps avoid disputes and protects you from allegations of overbilling.


State Bar Ethics Opinions

Although the ABA opinions set a framework, each state bar can and does refine these guidelines. Let’s highlight some of the most influential or detailed state bar opinions on AI.

Florida

Florida Bar Advisory Ethics Op. 24-1 (Jan 2024) explicitly permits generative AI usage, but only if the lawyer can comply with the duty of confidentiality, verify all citations, and ensure the client is not overcharged. If using a third-party AI tool might risk disclosing sensitive info, Florida lawyers must obtain the client’s informed consent. Florida also warns that “AI cannot be allowed to make final decisions.” The attorney must review the AI’s output and incorporate legal judgment before filing or finalizing documents.

District of Columbia

D.C. Bar Ethics Op. 388 (Apr 2024) focuses on the lawyer's responsibility to understand AI's limitations. For example, an attorney who does not know that AI can hallucinate is already failing the duty of technological competence. D.C. also suggests that lawyers consider saving AI prompts and outputs in the client file, just as they would with any legal research, to document how the final work product was formed.

Pennsylvania–Philadelphia

Joint Formal Opinion 2024-200 from the Pennsylvania Bar and Philadelphia Bar is noteworthy for requiring explicit verification of every citation AI suggests. Because of real-world examples of nonexistent case citations, Pennsylvania lawyers are told they must pull and read each cited case. The opinion emphasizes that if a client’s confidential information might be input into an AI tool with uncertain data practices, the lawyer must proceed only with informed consent or ensure the data is properly protected.

Kentucky

Kentucky Bar Ass’n Ethics Op. KBA E-457 (Mar 2024) underscores that lawyers have a duty to stay educated on how AI might affect their practice. If AI drastically reduces the time needed for a particular task, the attorney must adjust the fee to remain reasonable. Kentucky also suggests that minor AI usage (like using a grammar-check tool) does not require client notification, but more substantial involvement might.

Practice Pointer
If you are considering a new AI tool at your firm, create a short “due diligence” checklist. Evaluate:

  1. Data Security: Does the tool store or share inputs publicly?
  2. Accuracy: Are there disclaimers about “hallucinations”?
  3. Cost: How will fees be passed on to clients, if at all?
  4. User Training: Have lawyers and staff received guidance on verifying AI output?

Texas

Ethics Op. 705 (Feb 2025) from the Texas Center for Legal Ethics affirms that generative AI is within the scope of “technological competence” under Texas Rule 1.01. Lawyers can use AI but must do so in a manner consistent with confidentiality and candor. Similar to other states, Texas explicitly warns that “the attorney’s ultimate responsibility for the final work product remains undiminished” (Texas Center for Legal Ethics, Opinion 705).

Other States

Missouri's Informal Advisory Ethics Op. 2024-11 (Apr 2024) encourages lawyers to vet AI platforms for confidentiality and accuracy and to adopt internal firm policies on AI use. West Virginia's Lawyer Disciplinary Board (June 2024) stresses that AI should supplement, not replace, a lawyer's reasoning and strongly advises obtaining informed client consent. Across these states, the pattern is clear: AI is not banned, but lawyers must be careful, verifying output and protecting client data.


Court Rules and Judicial Guidance

While bar associations regulate attorney conduct, courts have also begun to impose their own directives. This section reviews some groundbreaking court orders that require attorneys to disclose or certify their use of AI in litigation.

Federal Judges’ Standing Orders

  1. Judge Brantley Starr (N.D. Texas) – In May 2023, Judge Starr’s standing order set the tone. Any filing in his court must include a certificate stating either (a) no AI was used, or (b) if AI was used, a human has thoroughly checked every citation and fact (N.D. Texas Standing Order, 2023). Noncompliance can get the filing struck from the record.
  2. Judge Gabriel Fuentes (N.D. Illinois) – Requires that parties disclose if they used a generative AI tool to prepare any part of a filing, specifying the tool (e.g., ChatGPT). This is purely about transparency.
  3. Judge Stephen Vaden (U.S. Court of Int’l Trade) – Goes further by insisting lawyers identify which exact portions of a filing were produced by AI and certify that no confidential information was compromised.
  4. Judge Michael Baylson (E.D. Pennsylvania) – Implements an even broader rule: attorneys must disclose all AI use, including older tools used for e-discovery or research. This covers more than just ChatGPT-type platforms.
  5. Judge Peter Kang (N.D. California) – Similar disclosure requirement but explicitly excluding standard software like word processors and typical legal research databases.

Example
Scenario: You’re in federal court in N.D. Texas. You decide to have ChatGPT draft a summary of the facts for your motion. According to Judge Starr’s rule, you now must attach a certification that you verified the entire text thoroughly. If you fail to do so, your filing might be rejected.

State Courts

As of this writing, no state supreme court has adopted a statewide AI rule, but some state trial judges have begun issuing individual orders mirroring the federal approach. It is essential to check local court websites and standing orders before filing. This rapid judicial response was largely triggered by the widely publicized AI-generated "fake citations" fiascos.


State and Federal Laws & Regulations

Beyond ethics codes and court orders, legislatures at the state and federal levels are grappling with how to regulate AI. Although most of these laws apply broadly to AI in various industries, they can indirectly impact lawyers.

State Legislation

No state statute yet targets lawyers' use of AI specifically, but broad AI legislation is emerging. The most prominent example is the Colorado AI Act (SB 24-205, signed May 2024), which imposes duties on developers and deployers of "high-risk AI systems." Such laws are not aimed at legal practice, but attorneys who use AI in regulated contexts, or who advise clients deploying it, must understand them. California and other states are weighing similar measures (California Lawyers Association, 2024).

Federal Initiatives

Currently, no comprehensive federal law specifically regulating attorneys' use of AI exists. However, federal agencies are watching: the FTC has warned that unfair or deceptive AI practices fall within its enforcement authority, and various legislative proposals remain pending (White & Case, 2024).

Lawyers ignoring these technological and regulatory developments might be deemed "technologically incompetent" in future ethics or malpractice disputes.


Case Law and Enforcement Actions

The past two years have seen a handful of disciplinary actions that show how seriously courts and bar authorities treat AI-related misconduct. Let’s look at three key cases.

Mata v. Avianca (S.D.N.Y.)

In early 2023, a lawyer filed a brief citing six supposed precedents that did not exist: ChatGPT had fabricated them. When opposing counsel could not locate the cases, the court (Judge P. Kevin Castel) determined they had been "hallucinated" by ChatGPT. The lawyer admitted he was unaware AI could generate fake citations. The court imposed a monetary sanction on the attorneys involved and required letters of apology to the judges falsely identified as authors of the fabricated opinions. This was the first high-profile "ChatGPT meltdown," and it sparked many judges' standing orders on AI disclosure.

Callout: Lesson from *Mata v. Avianca*
Never assume AI output is correct or real. Double-check each citation in a reliable database. Failing to do so can result in sanctions and career-damaging headlines.

Park v. Kim (2d Cir.)

Just months later, another lawyer faced discipline for citing a fictitious case generated by ChatGPT. The Second Circuit discovered the citation did not exist and referred the attorney for a disciplinary investigation. The court emphasized that no new rule is needed to require lawyers to verify their filings: "every attorney should already know this." The episode underscores how reliant some lawyers had become on AI without verifying its results.

People v. Crabill (Colorado Disciplinary Court)

In Colorado, an attorney received a one-year suspension (with a portion stayed) for filing a motion containing AI-fabricated law. He eventually realized the error but failed to promptly notify the court or correct the record, violating rules on candor. This case reveals that bar disciplinary bodies (not just courts) are prepared to impose sanctions.


Mitigating Bias in Generative AI

Technological innovation offers many benefits to the legal profession, but it also carries inherent risks; one of the most significant is bias within AI systems. Bias can arise at multiple stages of AI development and deployment, leading to unfair or skewed outcomes. In a field like law, which affects people's fundamental rights and opportunities, such bias can have serious ethical and legal consequences. Understanding why AI systems exhibit bias, and how to mitigate it, is crucial to ensuring these tools remain not only effective but also just.

Examples of Bias in Law

In legal contexts, AI-powered tools, particularly those used for predictive policing, provide a stark illustration of how bias can manifest. Many municipalities use algorithms to forecast where crime is most likely to occur, basing these predictions on historical arrest and incident data. However, if earlier policing practices were influenced by racial profiling or disproportionate targeting of certain neighborhoods, then the AI tool essentially learns and perpetuates those patterns. Instead of offering a fair, evidence-based assessment, the algorithm may repeatedly direct law enforcement toward already over-policed communities, reinforcing a cycle of inequality.

Sources of Bias

Bias in AI typically originates from one (or more) of three main sources: training data bias, algorithmic bias, and cognitive bias. Understanding these sources helps attorneys and developers identify where and how to intervene.

  1. Training Data Bias

    • Data Sampling Imbalances: If the AI’s training data over-represents some groups while under-representing others, the model develops an uneven perspective on real-world situations. For instance, if a legal AI is only fed case data from large urban areas, it might fail to capture nuances from rural jurisdictions.
    • Data Labeling Errors: Human annotators, tasked with labeling or categorizing data, can make mistakes or hold prejudices. These errors become “facts” embedded in the training set, and the resulting model inherits these flawed views.
  2. Algorithmic Bias

    • Flawed Training Data → Biased Algorithms: Even with the best intentions, if your dataset carries historical or societal biases, the AI’s outputs will replicate them.
    • Programming Errors: Developers can inadvertently incorporate assumptions or thresholds that disadvantage certain demographic groups. For example, a model might rely on income-related or vocabulary-based indicators, which correlate more strongly with specific racial or socio-economic groups.
  3. Cognitive Bias

    • Human Bias in Decision-Making: The individuals who select and weight the training data can pass along their unconscious beliefs and judgments. A related hazard, "automation bias," is the human tendency to over-trust a system's output simply because it is automated.
    • Overlooked Sources of Bias: Because AI can be complex, data scientists or lawyers might not spot the subtle ways prejudice creeps into a dataset or an algorithm’s logic.

Learning from “Coded Bias”

Researcher Joy Buolamwini from MIT, featured in the documentary Coded Bias, discovered that many facial recognition systems performed poorly on women and people of color due to a lack of diverse training images. Although this example often focuses on facial recognition, the broader lesson applies to any AI: if the initial dataset fails to reflect the diversity of real-world conditions, the AI will struggle to produce fair and accurate results. For legal professionals, Buolamwini’s findings underscore the importance of scrutinizing the data behind AI platforms and advocating for inclusive development practices.

For attorneys, mitigating bias in generative AI means knowing the data that informs your tools and asking the right questions about how that data was compiled. Courts are increasingly aware of algorithmic bias and may challenge or exclude evidence produced by tools deemed unreliable or discriminatory. Furthermore, bar associations emphasize that technological competence includes understanding the risk of biased outcomes and taking reasonable steps to address it: transparent data practices, routine audits, and ongoing collaboration with technologists and ethicists. By approaching AI with both enthusiasm and caution, legal professionals can harness its potential while safeguarding the fairness and integrity of our justice system.


Practical Implications and Best Practices

So what does this mean for you? Here are the top takeaways and strategies for ethically integrating AI into your legal work.

Verification is Mandatory

“Trust, but verify” is the mantra. When AI suggests a case, a statute, or a summary of facts, you must confirm it in recognized sources. Some attorneys now designate a second person, like a paralegal, to do a final citation check whenever AI is used to generate legal analysis.

Practice Pointer
Always keep a secure path back to original sources. If ChatGPT references “Smith v. Jones, 457 U.S. 300,” immediately pull that case on Westlaw or Lexis. Confirm it exists, confirm the quotes, and confirm the holding is correctly described.
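
A script cannot confirm that a case exists, but it can help ensure no citation-shaped string slips through unchecked. The Python sketch below uses an intentionally rough, hypothetical citation pattern to extract candidates for manual lookup in Westlaw or Lexis; it is a safety net for the human verification step, not a citator.

```python
import re

# Rough, illustrative pattern for "Party v. Party, 123 Rep. 456" citations.
# It cannot verify that a case is real; it only flags strings for human review.
CITATION = re.compile(
    r"[A-Z][A-Za-z.'-]+(?:\s+[A-Z][A-Za-z.'&-]*)*\s+v\.\s+"  # first party "v."
    r"[A-Z][A-Za-z.'-]+(?:\s+[A-Z][A-Za-z.'&-]*)*,\s+"       # second party,
    r"\d+\s+(?:[A-Za-z0-9.]+\s+)+\d+"                        # volume, reporter, page
)

def citation_checklist(ai_output: str) -> list[str]:
    """Return every citation-shaped string so a human can pull each case."""
    return CITATION.findall(ai_output)

draft = ("The brief cites Smith v. Jones, 457 U.S. 300, and also "
         "Mata v. Avianca, 678 F. Supp. 3d 443.")
for cite in citation_checklist(draft):
    print("VERIFY MANUALLY:", cite)
```

A list of extracted citations is only the starting point: each one must still be pulled and read in a recognized database before the filing goes out.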

Confidentiality Safeguards

Never input confidential data into an AI tool without first checking the platform’s data usage policy. If the policy is unclear or if you suspect your inputs may be used to train the model, consult the client or anonymize the details.

Consider using a subscription-based or enterprise AI solution that guarantees data privacy rather than free consumer versions.

Not every minor usage of AI requires telling your client, but if the use is significant, especially if it could reveal sensitive details or substantially affect the representation, best practice is to talk with your client and obtain informed consent. Some state bar opinions explicitly require this for certain AI uses.

Maintain Human Oversight

Remember that no matter how sophisticated AI becomes, it cannot replace your professional judgment. The final say on strategy, arguments, and the interpretation of law must come from a licensed attorney. Think of AI as an “assistant” that needs close supervision.

Example
Scenario: You represent a client in a contract dispute. You let AI draft the entire motion to dismiss. If you just copy-paste it without thorough review, you violate your duty to supervise. If the motion contains a misstatement of law, you own that error.

Supervising Others Who Use AI

Under Model Rules 5.1 and 5.3, partners and supervising attorneys must ensure their associates and staff follow ethical guidelines. This may require drafting a firm-wide policy on AI usage, offering training, and monitoring compliance.

Comply with Court Requirements

Check local rules and judge-specific standing orders. If a court requires an AI disclosure or a certificate of verification, follow the requirement precisely. Missing these steps can lead to your filing being rejected or to other sanctions.

Billing Ethics

If AI saves you significant time, don’t bill as if you spent the old, longer hours. This also means you should be transparent with clients about how the technology benefits them. Some firms are moving to flat fees for tasks heavily assisted by AI to avoid complications with hourly billing.

Callout: Example of Ethical Billing
If you previously spent 10 hours on a memo but now can do a better first draft in 3 hours (with AI’s help), you shouldn’t charge 10 hours. Instead, you could bill 3 hours plus a modest AI “platform cost” or incorporate a flat rate that reflects the actual value to the client.
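
As a back-of-the-envelope check, the arithmetic looks like this (the hourly rate and platform cost below are invented for illustration, not guidance):

```python
HOURLY_RATE = 400.00      # hypothetical billing rate, USD
HOURS_WITH_AI = 3         # time actually spent, with AI assistance
HOURS_WITHOUT_AI = 10     # what the memo used to take
AI_PLATFORM_COST = 25.00  # hypothetical disclosed pass-through tool cost

reasonable_bill = HOURS_WITH_AI * HOURLY_RATE + AI_PLATFORM_COST
inflated_bill = HOURS_WITHOUT_AI * HOURLY_RATE

print(f"Reasonable bill: ${reasonable_bill:,.2f}")  # $1,225.00
print(f"Inflated bill:   ${inflated_bill:,.2f}")    # $4,000.00
```

The gap between those two figures is exactly what Rule 1.5's reasonableness requirement polices; a flat fee reflecting value to the client is another way to capture it.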

Staying Informed

AI technology evolves quickly, as do the rules governing its use. Regularly check for updates from your state bar, local courts, and the ABA. Many firms circulate internal memos on new AI developments to ensure compliance.

Documentation of AI Use

Some bar opinions recommend saving your AI prompts and outputs in the case file, especially for critical tasks. This “paper trail” can demonstrate you acted diligently. Of course, if the prompts reveal strategy or client details, secure them as part of your confidential work product.
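
One lightweight way to build that paper trail is an append-only log in the matter folder. The sketch below is a hypothetical helper, not a bar-mandated format: the field names, JSON-lines layout, and folder structure are all assumptions, and the log itself should be stored as confidential work product.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_use(matter_dir: str, tool: str, prompt: str, output: str) -> None:
    """Append one AI interaction to a JSON-lines audit log in the matter folder."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                   # e.g., "ChatGPT" or an enterprise model
        "prompt": prompt,               # may reveal strategy: treat as confidential
        "output": output,
        "verified_by_attorney": False,  # flip to True only after human review
    }
    folder = Path(matter_dir)
    folder.mkdir(parents=True, exist_ok=True)
    with (folder / "ai_use_log.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: the matter number and content are invented.
log_ai_use("matters/2025-0042", "ChatGPT",
           "Summarize the attached (redacted) deposition excerpt.",
           "The deponent testified that ...")
```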

Leveraging AI Effectively

Finally, don’t forget the benefits. AI can help you quickly generate outlines, summarize discovery documents, or propose contract language. Used wisely, it can enhance efficiency and reduce repetitive work, freeing you for higher-level tasks that require human nuance.

Practice Pointer
Consider training AI on your internal knowledge base (like anonymized sample briefs, motions, or memos). This can yield more tailored results while controlling confidentiality risks, if your firm’s IT department sets it up securely.


Chapter Recap

In this chapter, we explored the ethical and regulatory implications of using generative AI in legal practice. Major takeaways include:

  1. Existing ethical duties fully apply to AI: competence, confidentiality, client communication, candor, supervision, and reasonable fees (ABA Formal Opinion 512).
  2. State bar opinions (Florida, D.C., Pennsylvania, Kentucky, Texas, and others) refine these duties, with common themes of verification, data protection, and informed client consent.
  3. A growing number of judges require attorneys to disclose AI use or certify that AI-generated content has been verified; check standing orders before filing.
  4. Sanctions cases such as Mata v. Avianca, Park v. Kim, and People v. Crabill show that courts and disciplinary bodies will penalize unverified AI output.
  5. Bias can enter AI systems through training data, algorithms, and human judgment; technological competence includes recognizing and mitigating it.

By weaving these insights into your practice, you can reap AI’s benefits while navigating the ethical pitfalls. AI can be a powerful tool, provided you stay alert, well-informed, and accountable.


Final Thoughts

The deeper we dive into the ethical and regulatory frameworks around generative AI, the clearer it becomes that each new development, be it a groundbreaking court order or a cautionary case, ultimately stands for the same principle: technology might change how we practice law, but not our responsibility to uphold professional standards. If there’s one message that resonates through every ethics opinion and judicial directive, it’s that lawyers themselves remain the gatekeepers of accuracy, candor, and confidentiality. AI can draft, summarize, and innovate, but it’s still the human attorney who must stand behind every word.

I find this balance between human judgment and artificial intelligence both exciting and reassuring. Exciting, because it signals a future where technology can reduce drudgery and free us to focus on more nuanced, strategic, and client-centered work. Reassuring, because the heart of legal practice, our commitment to truth, justice, and advocacy, remains firmly in our hands. AI cannot supplant the empathy, creativity, and moral compass that define truly effective lawyering; it’s there to support us, not replace us.

From these insights, I see an immense opportunity for our profession. If we integrate AI responsibly, verifying information, securing client data, and staying transparent about how we use these tools, we can usher in a more efficient and perhaps even more just legal system. But that promise hinges on the choices we make today: whether we choose to learn AI’s capabilities, to anticipate its pitfalls, and to uphold the standards our clients and the courts expect. By embracing this technology with both enthusiasm and caution, we can shape the future of legal practice rather than be shaped by it.


What's Next?

In Chapter 9, we will explore how generative AI might help address Access to Justice issues and expand pro bono opportunities. We will discuss potential ways AI can automate routine tasks for legal aid organizations, the ethical considerations of providing AI-driven legal services to underserved communities, and the creative ways that technology might help close the justice gap. Keep the lessons of this chapter in mind as you consider the risks and rewards of using AI to expand legal services to broader populations.


References

American Bar Association (2024). ABA Formal Opinion 512: Generative Artificial Intelligence Tools. ABA Standing Committee on Ethics and Professional Responsibility.

American Bar Association (2012). Model Rules of Professional Conduct. Rule 1.1, Comment 8 (Tech Competence).

California Lawyers Association (2024). Is California Leading the Way on AI or Just Causing Chaos?

Kantayya, S. (Director). (2020). Coded Bias [Documentary]. Featuring researcher Joy Buolamwini, MIT Media Lab.

Colorado General Assembly (2024). Colorado AI Act (SB 24-205).

D.C. Bar (2024). Ethics Op. 388: Use of AI in Law Practice.

Florida Bar (2024). Advisory Ethics Op. 24-1.

Kentucky Bar Ass’n (2024). Ethics Op. KBA E-457.

Missouri Bar (2024). Informal Advisory Ethics Op. 2024-11.

N.D. Texas (2023). Judge Brantley Starr’s Standing Order on Generative AI.

Park v. Kim, 2024 WL 332478 (2d Cir. Jan. 30, 2024).

Pennsylvania & Philadelphia Bar (2024). Joint Formal Op. 2024-200.

People v. Crabill, 2023 WL 8111898 (Colo. O.P.D.J. Nov. 22, 2023).

State Bar of Texas (2025). Texas Center for Legal Ethics Op. 705.

U.S. Court of International Trade (2023). Judge Stephen Vaden’s AI Order.

White & Case (2024). AI Watch: Global Regulatory Tracker – United States. Retrieved from White & Case website.


Summary Table of Key AI-Ethics Developments

Below is a high-level snapshot of some important ethical developments, standing orders, and cases from 2023 to 2025. Each entry highlights the authority, the rule or opinion, and the date. Keep in mind that new orders and opinions continue to emerge quickly.

| Authority | Rule/Opinion/Case | Date | Key Points |
| --- | --- | --- | --- |
| ABA – Model Rules | Model Rule 1.1, Comment 8 (Tech Competence) | Aug 2012 | Lawyers must maintain technological competence, understanding “the benefits and risks associated with relevant technology.” Adopted by most states into their own rules. |
| ABA | Formal Opinion 512, Generative Artificial Intelligence Tools | July 2024 | Emphasizes existing ethical duties (competence, confidentiality, communication, candor, supervision) fully apply to AI use. Warns of AI “hallucinations” (false output) and the need to verify all results. |
| Florida Bar | Advisory Ethics Op. 24-1 | Jan 2024 | Lawyers may use GenAI with safeguards for confidentiality. Must protect client info, supervise AI, and ensure accurate citations. Requires disclosure and client consent if there is a risk of exposing confidential data. |
| Kentucky Bar | Ethics Op. KBA E-457 | Mar 2024 | Stresses the duty to remain educated about AI tools, verify AI outputs, and handle confidentiality. Mentions adjusting fees if AI dramatically reduces attorney time. |
| D.C. Bar | Ethics Op. 388 | Apr 2024 | Attorneys must understand AI’s limits and verify outputs. If confidential data is provided to AI, must confirm the tool’s security. Recommends keeping records of AI prompts and responses in client files. |
| Pennsylvania & Philadelphia Bar | Joint Formal Op. 2024-200 | 2024 | Lawyers must verify all AI-suggested citations. Informed client consent recommended. AI cannot replace attorney judgment. |
| Texas Center for Legal Ethics | Ethics Op. 705 | Feb 2025 | Clarifies generative AI falls under “technological competence.” Requires due diligence on confidentiality. Attorneys should not overbill when AI significantly shortens a task. |
| Missouri Bar | Informal Ethics Op. 2024-11 | Apr 2024 | Encourages lawyers to vet AI platforms for confidentiality and accuracy. Advocates internal firm AI policies. |
| West Virginia LDB | Opinion on AI in Law | June 2024 | AI should supplement, not replace, a lawyer’s reasoning. Strongly advises informed client consent. |
| Federal Courts – N.D. Texas | Judge Brantley Starr’s Standing Order | May 2023 | Requires attorneys to certify whether filings were drafted by AI and, if so, that a human checked all references and quotations. Noncompliance can lead to striking the filing. |
| Federal Courts – N.D. Illinois | Judge Gabriel Fuentes’ Standing Order | June 2023 | Mandates disclosure of any generative AI use in drafting court filings, including the specific tool used. Aims for transparency to the court. |
| Federal Courts – U.S. Ct. of Int’l Trade | Judge Stephen Vaden’s AI Order | Oct 2023 | Requires disclosure of any AI tool used, which portions of text are AI-generated, and certification that no confidential or privileged info was disclosed to the AI. |
| Federal Courts – E.D. Pennsylvania | Judge Michael Baylson’s Standing Order | Nov 2023 | Broad disclosure rule: attorneys must disclose any AI usage in court filings, including traditional e-discovery or research algorithms. Seeks maximum transparency. |
| Federal Courts – N.D. California | Mag. Judge Peter Kang’s Standing Order | Jan 2024 | Requires disclosure only for generative AI, not ordinary tools like word processors or standard legal research. Also warns attorneys about confidentiality concerns. |
| Notable Case | Mata v. Avianca (S.D.N.Y.) | June 2023 | Two attorneys sanctioned for citing nonexistent cases generated by ChatGPT. Sparked a wave of new AI-disclosure orders. |
| Notable Case | Park v. Kim (2d Cir.) | Jan 2024 | Another AI-fabricated case citation surfaced. The attorney was referred to the court’s Grievance Panel. |
| Notable Case | People v. Crabill (Colo. Disc. Ct.) | Nov 2023 | Colorado attorney disciplined (suspension) for filing a motion containing fake case law from ChatGPT. |
| State Legislation | e.g., Colorado AI Act (SB 24-205) | May 2024 | States are enacting AI laws governing “high-risk AI systems.” Not specifically directed at lawyers, but attorneys advising clients or using AI in regulated contexts must be aware. |
| Federal Activity | Various proposals; no law yet | 2023–2024 | No comprehensive federal statute regulating lawyers’ use of AI. FTC and other agencies monitoring AI for unfair or deceptive practices. |

This table offers a glimpse into the rapidly growing patchwork of AI-related ethics opinions, court orders, and legislative actions.