Glossary

Below are concise definitions of important terms, grouped by chapter and arranged alphabetically within each chapter.

Chapter 1

AI Winters
Periods in the history of artificial intelligence when excitement and funding dried up due to unmet expectations, causing research progress to slow or stall.

AlphaGo
An AI program created by DeepMind that mastered the complex board game Go. Its victory over world champion Lee Sedol in 2016 demonstrated the power of combining deep learning with reinforcement learning.

AlphaZero
A successor to AlphaGo, also developed by DeepMind. Unlike AlphaGo, AlphaZero not only learned Go entirely from scratch but also mastered chess and shogi—all through self-play, without domain-specific programming.

Artificial General Intelligence (AGI)
A theoretical form of AI capable of understanding or learning any intellectual task that a human being can, rather than being limited to one domain (e.g., image recognition or language translation).

Artificial Intelligence (AI)
A field of computer science focused on creating machines or software that can perform tasks commonly associated with human intelligence, including reasoning, pattern recognition, and problem-solving.

ChatGPT
A user-friendly chatbot interface powered by a Generative Pre-trained Transformer (GPT). Launched in late 2022, it brought AI-generated text into the mainstream by allowing anyone to type queries in natural language and receive rapid, coherent responses.

Combinatorial Explosion
A phenomenon where the number of possible scenarios or outcomes grows exponentially as more variables or conditions are added, making purely rule-based systems unwieldy for complex tasks.

Conditional Logic
A traditional programming approach that uses explicit “if-then” statements to determine outcomes based on specified conditions, effective for simple tasks but prone to exponential complexity when dealing with nuanced scenarios.
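The two entries above can be sketched in a few lines of code. This toy intake-routing function (the conditions and outcomes are invented for illustration) shows how each added yes/no condition doubles the number of scenarios a rule-based system must account for:

```python
# Toy rule-based triage: with n independent yes/no conditions, a purely
# "if-then" system must account for 2**n distinct scenarios -- the
# combinatorial explosion described above.

def triage(urgent: bool, has_deadline: bool, high_value: bool) -> str:
    """Three conditions already yield 2**3 = 8 possible paths."""
    if urgent:
        if has_deadline:
            return "escalate immediately" if high_value else "same-day review"
        return "priority queue"
    if has_deadline:
        return "schedule before deadline"
    return "standard queue"

def scenario_count(n_conditions: int) -> int:
    """Number of distinct scenarios for n yes/no conditions."""
    return 2 ** n_conditions
```

At three conditions the rules are still manageable; at twenty, a rule-based system would face over a million scenarios, which is why machine learning approaches took over for nuanced tasks.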

Dartmouth Conference
A 1956 gathering of computer scientists, including John McCarthy and Marvin Minsky, often regarded as the birthplace of AI as a formal field of study.

Generative AI
A branch of AI that creates new, original content, such as text, images, or audio, by learning patterns from large datasets, rather than merely classifying or retrieving existing data.

IBM Watson
An AI system that gained fame by winning the quiz show Jeopardy! in 2011. Watson used a combination of natural language processing and machine learning to interpret questions and find correct answers within seconds.

Machine Learning
A subset of AI in which algorithms learn patterns from data rather than being explicitly programmed with rules. This underpins much of modern AI, including generative and predictive models.

Neural Networks
Computational models inspired by the human brain’s interconnected neurons. They excel at pattern recognition tasks and form the basis of deep learning approaches that power many advanced AI applications.

Transformer
A neural network architecture known for its “attention mechanism,” which allows it to process entire sequences of data (like sentences) in parallel. Models such as GPT are based on this architecture, enabling sophisticated language understanding and generation.


Chapter 2

Bias
In machine learning, this is an unwanted preference or skew in the model’s outputs, often resulting from unrepresentative or flawed training data.

Deep Learning (DL)
A specialized branch of machine learning using multi-layered neural networks that learn complex patterns from large volumes of data.

Embedding
A compact numerical representation of words (or other items) where similar meanings or contexts end up close together in “embedding space.”

Extrapolation
A model’s attempt to handle completely new or unseen scenarios beyond its training data, something large language models typically struggle with.

Garbage In–Garbage Out (GIGO)
The principle that if a model is trained on low-quality or biased data, it will produce low-quality or biased results.

Gradient Descent
A training method where the model incrementally adjusts parameters by “stepping downhill” to reduce errors in its predictions.
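A minimal one-parameter sketch of the "stepping downhill" idea (the error curve here is invented for illustration; real models adjust billions of parameters at once):

```python
# Gradient descent on a toy error curve, error = (w - 3)**2.
# The gradient 2*(w - 3) points "uphill", so we step the other way.

def gradient_descent(w: float = 0.0, learning_rate: float = 0.1,
                     steps: int = 100) -> float:
    for _ in range(steps):
        gradient = 2 * (w - 3)          # slope of the error at current w
        w -= learning_rate * gradient   # small step downhill
    return w                            # converges toward w = 3
```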

Large Language Model (LLM)
A neural network with billions of parameters, trained on massive amounts of text to predict the next word in a sequence.

Parameter
A numerical value (weight or bias) in a machine learning model that gets tweaked during training to improve the model’s performance.

Perceptron
The simplest form of an artificial neuron, taking inputs, applying weights, and outputting a “yes” or “no” decision based on a threshold.
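The perceptron's weighted-sum-and-threshold decision fits in a few lines (the weights and threshold below are illustrative, not learned):

```python
# A single artificial neuron: weight the inputs, sum them, and compare
# the total against a threshold to produce a yes (1) or no (0) decision.

def perceptron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0
```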

Reinforcement Learning (RL)
An approach where an AI “agent” learns by trial and error, receiving positive or negative feedback as it interacts with its environment.

Scaling Hypothesis
The idea that bigger models, more data, and more computational power lead to increasingly capable AI systems.

Transformer Architecture
A neural network design that uses “attention” mechanisms to process entire sequences (like sentences) in parallel, greatly improving efficiency.

Vector
A list of numbers representing information (e.g., a word), where each dimension corresponds to a particular attribute or context.
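The embedding and vector entries above can be made concrete with cosine similarity, the standard measure of how closely two vectors point the same way. The three-dimensional vectors below are hand-picked for illustration; real embeddings have hundreds or thousands of dimensions computed by a model:

```python
import math

def cosine_similarity(a, b):
    """Near 1.0 for vectors pointing the same way, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative vectors: similar meanings end up close in embedding space.
contract  = [0.9, 0.8, 0.1]
agreement = [0.85, 0.75, 0.2]
banana    = [0.1, 0.05, 0.9]
```

Here "contract" and "agreement" score near 1.0 while "contract" and "banana" score much lower, which is exactly the property that makes semantic search possible.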


Chapter 3

Agent
A semi-autonomous AI system that can perform tasks end-to-end with minimal human oversight, such as automating client intake or drafting legal documents.

Claude 3.5 Sonnet (Anthropic)
A large language model developed by Anthropic, noted for strong reasoning capabilities, a large context window, and a focus on ethical alignment.

Canvas
A collaborative editing space in ChatGPT that appears next to the chat window, making it easier to draft, edit, and refine text or code with AI assistance.

Chain of Thought
A reasoning process used by some AI models to “think” step-by-step before providing an answer, improving the quality of complex or analytical responses.

Constitutional AI
A training approach aimed at aligning AI outputs with human values and ethical guidelines, reducing harmful or biased content.

Context Window
The maximum amount of text or tokens an AI model can hold in mind at once, allowing it to maintain continuity in discussions and document analysis.

Copilot
An AI tool that supports human users by generating ideas or first drafts, always requiring final human review and approval.

GPT-4o
An advanced, multimodal AI model capable of handling text, images, audio, and video, noted for its fast responses and broad range of applications.

Large Language Model (LLM)
A sophisticated AI system trained on vast amounts of text, enabling it to generate coherent responses, summarize information, and perform various language tasks.

Multimodal
Describes AI capable of handling multiple data types, such as text, images, audio, or video, rather than just one.

NotebookLM
An AI-powered research and note-taking tool from Google, designed to summarize long documents and create study guides with inline citations.

o1
A reasoning-oriented AI model introduced in September 2024, known for its internal chain of thought that helps it tackle complex tasks more accurately.

o3
A successor to o1 with an enhanced private chain of thought, allowing for more advanced planning and problem-solving, useful in challenging legal or scientific scenarios.

Open-Source LLM
A large language model whose code or model weights are publicly available, enabling greater customization and transparency but requiring technical expertise.

Perplexity
An AI-powered search engine that combines conversational abilities with source attribution, useful for quickly finding and citing relevant legal information.

Predictive AI
A model designed to produce rapid, pattern-based responses without extensive logical reasoning, ideal for tasks like quick drafting or summarization.

Private Chain of Thought
An internal reasoning process in certain advanced AI models (like o3), allowing the AI to plan and deliberate before arriving at an answer.

Proprietary LLM
A large language model owned and operated by a specific company, often offering user-friendly features but with restricted access to its underlying code.

Reasoning AI
A model designed to approach tasks with a multi-step reasoning process, excelling in more complex tasks like detailed legal analysis or problem-solving.

Reinforcement Learning
A technique where an AI is rewarded or penalized for its actions, gradually guiding the model toward more accurate or efficient outcomes.

Tasks
A ChatGPT feature that schedules future actions or reminders, such as daily updates, making the AI more proactive in assisting users.

Vendor Lock-In
A situation where reliance on a specific AI provider makes it difficult to switch platforms or customize solutions without incurring costs or disruptions.

Vision Mode
A capability in ChatGPT (GPT-4o) allowing the AI to interpret and analyze visual content, potentially useful for examining images, diagrams, or video evidence.


Chapter 4

Bias and Fairness
Refers to how AI systems can perpetuate or amplify prejudices found in historical data, and the ethical responsibility to ensure technology treats all parties equitably.

Confidentiality
A lawyer’s ethical obligation to keep client information private and secure, especially crucial when uploading data to AI platforms.

Data at Rest
Information that is stored (e.g., on a server or hard drive) and not actively moving through the network. Often protected by encryption to ensure confidentiality.

Data in Transit
Information that is actively moving from one location to another (e.g., uploading files to an AI tool), requiring secure channels to prevent interception or unauthorized access.

Domain-Specific Corpus
A collection of specialized documents (like statutes, case law, and contracts) used to train or refine an AI model for better performance in a particular field, such as law.

Encryption
A method of securing data, both at rest and in transit, by transforming it into unreadable code that can only be deciphered with the correct key.

eDiscovery
The process of identifying, collecting, and reviewing electronically stored information (such as emails or digital documents) for use in legal proceedings.

Ethical Compliance
Adhering to professional rules and standards, like confidentiality and competence, when integrating AI into legal work.

Human in the Loop
An approach requiring a human (e.g., a lawyer) to supervise or validate AI-generated outputs, preserving ultimate responsibility for legal decisions.

Knowledge Management
Systems or processes designed to capture, organize, and retrieve a law firm’s work product (briefs, memos, templates) for reuse in future cases.

Large Language Model (LLM)
An AI model trained on massive amounts of text data, capable of generating or summarizing language in a human-like manner (e.g., GPT-4).

Legal Domain Expertise
Refers to AI models trained specifically on legal sources (court opinions, statutes, regulations) to enhance their accuracy in law-related tasks.

Natural Language Processing (NLP)
A branch of AI focused on enabling computers to understand, interpret, and generate human language in a meaningful way.

Predictive Coding
An eDiscovery technique where a small sample of reviewed documents “teaches” the AI to categorize relevance across a large dataset, speeding up document review.

SOC 2 Compliance
A security standard indicating that a technology vendor follows audited procedures to protect data from unauthorized access and vulnerabilities.

Unauthorized Practice of Law
Engaging in law-related tasks or offering legal advice without a license, which can occur if AI tools operate without attorney supervision or disclaimers.

Vendor Reliability
A measure of a technology provider’s trustworthiness, based on factors like financial stability, market reputation, and ongoing product support.

Workflow Integration
How easily an AI tool merges with a law firm’s existing systems (e.g., document management, billing software) to streamline overall operations.


Chapter 6

AI Assistant Mindset
A perspective that treats AI like a capable research associate rather than an all-knowing oracle, meaning the user provides clear guidance, oversees the output, and verifies any important information.

Chain-of-Thought Prompting
A prompting technique that encourages the AI to break down its reasoning step by step. Although not always necessary for newer models, it can help clarify complex tasks or multi-step questions.

Enhanced RAG
An improved approach to Retrieval-Augmented Generation that may involve breaking down the user query into sub-queries, performing multiple retrieval passes, or re-ranking documents to find the most relevant information before generating an answer.

Few-Shot Prompting
A method where you include one or more examples in the prompt to show the AI the style or format you want. It helps the AI produce more accurate and relevant responses by illustrating exactly what you’re looking for.
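A minimal sketch of how a few-shot prompt is assembled (the example pairs and task are invented for illustration): the examples show the model the desired input/output format before the real question is appended.

```python
# Build a few-shot prompt: worked examples first, then the actual task.
examples = [
    ("Terminate the lease",
     "Formal: We hereby provide notice of termination of the lease."),
    ("Pay the invoice",
     "Formal: Kindly remit payment of the attached invoice."),
]
task = "Reschedule the meeting"

prompt = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
prompt += f"\n\nInput: {task}\nOutput:"
```

The model, seeing two completed examples, is far more likely to answer the third in the same style and format.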

Garbage In, Garbage Out
A saying that highlights how the quality of an AI’s response depends on the clarity and accuracy of your prompt. Poorly formulated questions often lead to poor answers.

Hallucination
When an AI confidently provides information that is entirely fabricated or incorrect. This can include made-up case citations, statutes, or any other “invented” data.

Informed Prompt
A well-structured prompt that provides clear context, specifics (such as jurisdiction or document type), and any other instructions needed for the AI to generate a focused, accurate response.

Iterative Prompting
An approach where you refine your prompt and interact with the AI in multiple rounds. You adjust your queries based on the AI’s previous answers, gradually homing in on the best possible result.

Knowledge-Augmented Generation (KAG)
A method that adds structured data (like a knowledge graph) to the retrieval process, helping the AI make more logically consistent or factually correct connections between pieces of information.

Knowledge Graph
A structured representation of facts and relationships (e.g., cases that overrule or modify other cases). AI models can use knowledge graphs to interpret connections more reliably than by text alone.

Meta-Prompting
Using the AI to help create or refine the prompt itself. You might ask the AI, “How should I prompt you to get the best answer on X?” and then use its suggestions in the final prompt.

Naive Prompt
A brief or vague prompt that leaves most of the interpretation to the AI, often resulting in generic or incomplete answers.

Naive RAG
A basic form of Retrieval-Augmented Generation where only a single set of documents is retrieved and passed to the AI. It can miss nuances or multi-part details if the query is complex.

Oracle Mindset
An outlook that views AI as an infallible source of truth. This can be risky, as users may accept AI outputs without verifying accuracy, leading to errors or misuse.

Prompt Engineering
The practice of carefully crafting prompts to guide an AI system toward producing more accurate, contextually relevant, and useful outputs.

Prompt Framework
A structured template or set of guidelines (like RTF, RISEN, or CRAFT) that helps you include all the essential details, such as context, role, and format, when formulating a prompt.

Reranking
A technique in retrieval systems where multiple documents are initially retrieved, then scored and sorted by relevance. Only the top-scoring documents are fed into the AI, improving answer quality.
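A toy sketch of the retrieve-then-rerank step (the scoring function here is simple word overlap; production systems use learned relevance models):

```python
# Rerank retrieved documents: score each against the query, sort by
# relevance, and keep only the top-k for the model to read.

def rerank(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    q_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(q_words & set(doc.lower().split()))  # word overlap

    return sorted(documents, key=score, reverse=True)[:top_k]
```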

Retrieval-Augmented Generation (RAG)
A strategy where the AI looks up external information, such as case law or statutes, before producing a response. This helps reduce errors and ensures the content is up to date.

Single-Shot Prompting
Providing the AI with a single prompt, without any examples or follow-up instructions, and relying on the model’s built-in knowledge to interpret the question.

Sub-Queries
Smaller, targeted questions used to handle complex or multi-faceted requests. The system retrieves or processes each sub-query separately, then combines the findings for a more comprehensive final answer.

Vector Database
A specialized database that stores text as numerical “vectors,” enabling semantic searches. Instead of matching exact words, it finds contextually similar passages, even if they use different terminology.
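An in-memory sketch of the idea (the two-dimensional vectors are hand-picked stand-ins for model-computed embeddings; real vector databases index millions of high-dimensional vectors efficiently):

```python
import math

def cosine(a, b):
    """Similarity between two vectors: higher means more alike in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class TinyVectorStore:
    """Store (vector, passage) pairs; search returns the closest passage."""

    def __init__(self):
        self.entries = []

    def add(self, vector, passage):
        self.entries.append((vector, passage))

    def search(self, query_vector):
        return max(self.entries, key=lambda e: cosine(e[0], query_vector))[1]
```

Because matching happens in vector space rather than on exact words, a query about "ending a rental agreement" can still surface a "lease termination" passage.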


Chapter 7

Alternative Fee Arrangements (AFAs)
A pricing model that differs from the traditional hourly billing, such as flat fees or performance-based fees, often to reflect AI-driven efficiencies.

Billable Hour
A traditional law firm billing method where attorneys charge clients based on the time (in hours) spent working on a matter.

Contract Analysis
The process of reviewing and evaluating contract language and clauses to identify risks, obligations, and opportunities for revision.

Discovery
In litigation, the phase where parties exchange documents and information. AI-based tools can accelerate review of large document sets.

Due Diligence
A comprehensive review of contracts and documents to assess legal risks, commonly aided by AI for faster analysis of large volumes of data.

Flat Fee
A set price charged to a client for a particular service, regardless of hours spent, frequently offered for AI-assisted services.

Human in the Loop (or Lawyer in the Loop)
A workflow where humans oversee and refine AI outputs, ensuring accuracy, ethical compliance, and contextual judgment.

Jevons’ Paradox
When technological efficiency lowers the cost of a product or service, overall consumption may rise rather than fall, potentially expanding the demand (and thus the workforce) instead of shrinking it.

Moravec’s Paradox
Tasks that seem complex to humans (like advanced reasoning) can be easier for AI, while “simple” human tasks (e.g., empathy, instinctive perception) can be harder for machines.

Moravec’s Irony
A twist on Moravec’s Paradox, describing how lawyers fear being replaced by AI yet also desire an instant, “easy button” solution to offload routine tasks, highlighting the conflict between wanting automation and preserving human roles.

Performance-Based Pricing
A fee structure where payment depends on achieving certain results or milestones, reflecting the efficiency gains from AI.

Shifting Lawyer Roles
The move from routine research and drafting tasks to higher-level strategic and supervisory duties, spurred by AI automation.

Value-Based Billing
A fee model aligning the cost of legal services with the outcome or value delivered, rather than time spent.

Workforce Optimization
Adjusting staffing needs, such as fewer junior associates, due to AI taking over routine tasks, allowing lawyers to focus on complex work.


Chapter 8

Algorithmic Bias
Systemic prejudice embedded in an AI’s processes and outputs, often caused by flawed or unrepresentative training data and coding assumptions.

Candor to the Court
A lawyer’s obligation to be honest and forthright with judges and tribunals, never misleading them with false statements or evidence.

Client Confidentiality
The ethical duty to protect all information related to a client’s representation from unauthorized disclosure.

Cognitive Bias
Human prejudices or unconscious beliefs that influence how data is selected, weighted, or interpreted in AI systems.

Coded Bias
A term popularized by researcher Joy Buolamwini (and her documentary of the same name) highlighting how AI tools can systematically discriminate against underrepresented groups when the underlying data or code is flawed.

Competence
A lawyer’s responsibility to provide knowledgeable and skillful representation, including staying updated on relevant technology.

Court Orders
Legally binding directives from judges, which may require attorneys to disclose or certify AI usage in their filings.

Disclosure
The act of revealing information, such as when lawyers inform a client or a court that AI has been used in preparing legal documents.

Ethics Opinion
Official guidance from a bar association or similar body interpreting how existing professional rules apply to specific scenarios.

Informed Consent
Permission a client gives after being fully advised of the risks, benefits, and alternatives, such as when confidential data might be shared with an AI platform.

Model Rules of Professional Conduct
Guidelines developed by the American Bar Association that many states adopt or adapt as their ethics rules for lawyers.

Predictive Policing
The use of AI tools to forecast crime locations or frequency, often criticized for perpetuating historical biases in law enforcement data.

Sanctions
Penalties imposed by a court or disciplinary authority on lawyers who violate legal or ethical obligations (e.g., for submitting AI-generated fake citations).

Supervision
A lawyer’s duty to oversee associates, staff, and AI tools to ensure all work meets ethical and professional standards.

Technological Competence
A requirement under many bar rules that lawyers understand the benefits and risks of technology, including AI, to provide competent representation.

Training Data Bias
A situation where the dataset used to teach an AI system over-represents or under-represents certain groups, leading to skewed outputs.

Verification
The process of checking and confirming the accuracy of information, particularly crucial for AI-generated research, citations, or legal documents.


Chapter 9

Access to Justice (A2J)
Refers to the ability of individuals to obtain fair legal assistance and effectively participate in the legal system, regardless of income, location, or other barriers.

Bias
In the context of AI and law, it means producing unfair outcomes or reinforcing stereotypes, often due to flawed data or assumptions in design or training.

Case Triage
A process of quickly assessing incoming legal matters to determine urgency, complexity, and the best path forward, often used by courts or legal aid offices to manage heavy caseloads.

Digital Divide
Describes the gap between people who have reliable internet and devices versus those who do not, limiting participation in online legal services or remote hearings.

Justice Gap
The mismatch between the legal needs of low-income or vulnerable communities and the resources available to meet those needs, leaving many unrepresented or unsupported.

Legal Desert
A rural or remote area with few or no practicing attorneys, making it difficult for residents to access legal counsel close to home.

Legal Services Corporation (LSC)
A federally funded nonprofit that provides support to civil legal aid programs in the United States, helping low-income people address critical legal problems.

Pro Bono
Legal work performed voluntarily and without payment to assist those who cannot afford an attorney, often for the public good.

Public Defender
An attorney employed or appointed by the government to represent criminal defendants who cannot afford private counsel, ensuring a fair trial.

Self-Help
Resources or tools allowing individuals to handle legal matters without a lawyer, such as standardized forms or how-to guides.

Self-Represented Litigant (SRL) or Pro Se
An individual who navigates a legal case without a lawyer, relying on personal research, online information, or limited professional guidance.

Unauthorized Practice of Law (UPL)
Occurs when someone who is not a licensed attorney provides specific legal advice or services, regulated by state bar rules to protect the public.


Chapter 11

Baseline Metrics
Initial measurements that capture the current state (e.g., time spent on tasks, cost per matter) before introducing new processes or tools.

Change Management
A structured approach for guiding organizations through transitions, ensuring that new methods, such as AI adoption, are integrated smoothly and sustainably.

Cross-Disciplinary Collaboration
Cooperation among diverse professionals (e.g., lawyers, IT staff, data specialists) who combine their expertise to implement AI solutions effectively.

Dedicated AI Team
A designated group or individual responsible for overseeing AI initiatives, from selecting tools to training staff and monitoring outcomes.

Lifelong Learning
The ongoing process of acquiring new knowledge and skills throughout one’s career, crucial for adapting to evolving technologies and practices.

Pilot Projects
Small-scale, focused experiments with AI that allow teams to test impact, gather feedback, and build confidence before wider implementation.

Reframing
A psychological technique for changing perceptions so that potential threats, like AI in law, are seen as opportunities for augmentation and growth.

Resistance to Technological Change
Hesitation or pushback against adopting new systems due to fear of competency loss, disruptions to routine, or uncertainty about outcomes.

Return on Investment (ROI)
A measure of the financial or strategic gains from an AI initiative compared to its cost, indicating how effectively resources are used.

Self-Efficacy
Confidence in one’s ability to learn and succeed with new tools or methods, influenced by experiences, peer examples, and positive reinforcement.