Chapter 8: Exercises

Below are five exercises designed to illustrate how generative AI can raise ethical dilemmas in different legal environments, as discussed in Chapter 8.

How to Use These Exercises

Get Started:

  1. Create a new notebook in NotebookLM (https://notebooklm.google.com/).
  2. Upload the PDF of Chapter 8 into NotebookLM. (This gives the notebook access to the chapter you are studying.)

Exercises:

  1. Copy and paste each scenario into the NotebookLM chat.
  2. Copy and paste the prompt that follows the scenario and submit it.
  3. Review the AI’s ethical issue spotting and its analysis.
  4. Reflect on any ethical issues the AI did not discuss, particularly around confidentiality, competence, bias, supervision, and candor. What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?

Exercise 1: AI-Assisted Document Review in Civil Litigation

Scenario
Greenwood & Blythe LLP represents a large pharmaceutical manufacturer facing a products-liability lawsuit. A second-year associate, feeling overwhelmed by the volume of discovery, uses a free online AI tool to organize and summarize thousands of internal emails. The emails include references to the company’s secret formula for a newly patented medication, user complaints about side effects, and staff discussions about potential regulatory hurdles. In order to “speed things up,” the associate uploads entire email threads, containing employees’ names, personal health remarks, and references to the proprietary formula, without consulting any senior partner or checking whether the AI platform saves or shares user inputs. The AI tool also seems to highlight certain employee emails more than others, though it’s unclear why. The associate relies heavily on the AI’s summaries without verifying accuracy.

Prompt
Based on Chapter 8, identify all the ethical issues presented in this scenario related to AI use.

Reflection
After reading NotebookLM’s response, note any concerns you think it may have missed, particularly issues around confidentiality, verification, or potential bias in how the AI categorizes documents. What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?


Exercise 2: In-House Counsel & Automated Contract Drafting

Scenario
Patricia is general counsel at a fast-growing software startup that recently implemented an advanced AI tool to draft new vendor agreements. This AI platform pulls language from a vast, publicly sourced dataset of prior contracts across many industries. Patricia places full trust in the AI’s initial drafts, often forwarding them to counterparties with only minimal edits. She also notices that the tool sometimes produces clauses excluding certain smaller or foreign-based suppliers. Company managers have asked Patricia to expedite contract processing, and she sees the AI as a perfect solution for speed, without mentioning any potential risks to stakeholders. A few employees have raised questions about whether the AI’s default language might inadvertently discriminate against smaller vendors or create hidden liabilities.

Prompt
Based on Chapter 8, identify all the ethical issues presented in this scenario related to AI use.

Reflection
Which parts of the scenario might raise red flags concerning competence, bias, client communication, or supervision? Did NotebookLM identify all of them? What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?


Exercise 3: Law Firm Chatbot for New Client Intake

Scenario
A mid-sized personal-injury law firm installs a chatbot on its website, hoping to attract more cases. The chatbot is designed to collect preliminary information from potential clients, such as the nature of their injury, the approximate date of the accident, and medical details. However, the chatbot also asks about personal demographics, like age and employment status, and sometimes makes comments like “Your case might not qualify for our services.” The firm did not add any disclaimers indicating that the chatbot is not a licensed attorney, nor did it fully configure the tool’s privacy settings to ensure that personally identifiable information is secure. Moreover, the firm’s leadership is unaware that the AI’s decision-making might be skewed by patterns in its training data, potentially turning away valid claims or favoring certain user profiles.

Prompt
Based on Chapter 8, identify all the ethical issues presented in this scenario related to AI use.

Reflection
Review NotebookLM’s analysis. Consider whether any issues regarding advertising ethics, confidentiality, bias, or unauthorized practice of law were overlooked. What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?


Exercise 4: Criminal Defense Attorney Using AI for Sentencing

Scenario
Damien, a public defender juggling a heavy caseload, relies on a generative AI tool to draft sentencing memoranda. He inputs detailed client histories, including records of prior convictions, childhood trauma, and mental health diagnoses. While the AI output is polished, Damien notices the AI occasionally references nonexistent case precedents or quotes from real cases but attributes them to the wrong jurisdictions. Pressed for time, he often adopts the AI’s recommended arguments word-for-word. Damien also wonders if the AI might unintentionally emphasize harsher sentencing factors based on biased training data but continues using it “to keep up with deadlines.”

Prompt
Based on Chapter 8, identify all the ethical issues presented in this scenario related to AI use.

Reflection
Did NotebookLM touch on possible confidentiality breaches, errors in citations, or bias in sentencing recommendations? What additional concerns can you identify? What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?


Exercise 5: Bias in Loan Approval Compliance

Scenario
An in-house legal compliance team at EquiSafe Credit Corp. deploys a generative AI model to scan loan applications for red flags under consumer protection laws. They feed it historical lending data, which includes years of approvals and denials that may reflect past discriminatory lending patterns. The AI starts flagging applications from particular zip codes with higher rejection rates, even when applicants meet the stated credit criteria. The compliance lawyers assume the tool is simply “efficient,” failing to investigate whether the algorithm is systematically biased based on location or demographic proxies, such as household size or surname origin. They present the AI’s findings to company leadership without mentioning any ethical or legal risks.

Prompt
Based on Chapter 8, identify all the ethical issues presented in this scenario related to AI use.

Reflection
Compare NotebookLM’s list of issues to your own. Pay special attention to whether it covers both discrimination risks (bias) and any duty to correct or disclose potentially unlawful practices. What did it get wrong? What did it get right? What do you think? Would you feel comfortable with this analysis? What else would you want to know about the scenario?