Generative artificial intelligence is gaining attention in the legal profession. The use of AI in litigation discovery raises intriguing possibilities. But lawyers must understand the risks and ensure their clients’ documents are protected when produced in discovery.
Although many lawyers have used AI tools for more than a decade, the advent of generative AI has changed the scope, capabilities, and power of those tools. Attorneys now use AI-powered tools designed to streamline and assist with reviewing, coding, and even digesting large batches of documents and other discovery materials such as emails and chat messages.
In litigation, parties frequently enter into confidentiality stipulations, or a court issues a protective order, to govern the production, exchange, and review of confidential documents and information.
Typically, these stipulations and/or orders outline who is allowed to review discovery labeled “confidential” or “attorneys’ eyes only,” how such discovery can be used during litigation, and what should be done with it at the end of the case.
AI tools should prompt litigants to ask serious questions, including: When, and how, should attorneys address generative AI when crafting confidentiality stipulations and proposed protective orders?
Lawyers and clients understandably want to explore the use of generative AI to streamline the discovery process. Discovery can be time-consuming and expensive, depending on the complexity of a case, and AI can help perform discovery functions such as reviewing electronically stored information and summarizing documents or sets of data.
For example, many e-discovery vendors offer generative AI programs that can summarize batches of documents, a task that might otherwise take hours to perform, and help weed out documents that don’t need to be reviewed by a human at all.
Many companies offer safe, secure, and encrypted platforms that incorporate generative AI technology and protect client data. But many publicly available generative AI programs aren’t encrypted, and data entered into them could become publicly available.
For some public-facing generative AI programs, the information you input may be stored and used in responses to questions posed by other users, or incorporated into the training data for the large language model that powers the program.
Financial institutions and other companies have banned their employees from using public programs such as ChatGPT for work-related tasks for that reason. They want to ensure sensitive information isn’t inadvertently exposed to the public.
In the litigation context, parties should consider whether they need to adapt their confidentiality stipulations or proposed protective orders to address the issues generative AI poses.
For instance, you might produce one of your client’s sensitive documents, marked “confidential,” to your adversary. A typical confidentiality stipulation provides that such documents can’t be shared with “any person or entity” except personnel of the parties assisting with the case, counsel for the parties, expert witnesses, the court and court personnel, court reporters, and testifying witnesses.
What happens if your adversary feeds the confidential document’s contents into a publicly available generative AI program, such as ChatGPT, and asks it to summarize the document? The AI program is certainly not a “person” and arguably not an “entity,” so your adversary may not have violated the plain text of the typical confidentiality stipulation. Your client’s confidential information could now be exposed, subverting the purpose of the confidentiality stipulation and document designation.
Your client’s information may be at risk even in certain “closed” AI programs that aren’t accessible to the general public. For example, what if a law firm has a proprietary generative AI program that’s accessible only to the firm’s employees and draws only on documents in the firm’s internal, secure, encrypted database? That sounds pretty safe so far.
However, the law firm’s internal and closed AI program could regurgitate the contents of your client’s confidential document in responses given to other lawyers and non-lawyer staff in the firm, even if they aren’t involved in the litigation. This can be particularly concerning with respect to highly confidential information—the type usually given “attorneys’ eyes only” designation or further restricted to review by individuals specifically identified in the confidentiality stipulation or protective order.
Lawyers may want to take a hard look at the confidentiality stipulations and protective orders they’ve used in the past and consider the risks generative AI poses. There’s no one-size-fits-all approach, and the technology is changing rapidly, but there are several things lawyers can consider.
Some litigants may want to include a provision in their confidentiality stipulations that no “confidential” or “attorneys’ eyes only” document can be uploaded into a platform or program that has a generative AI component. This approach may be impractical given how many e-discovery platforms are incorporating generative AI into their products.
Another option is to include provisions restricting the use of generative AI to specified vendors and programs the parties believe are safe and secure, or prohibiting the use of specific public-facing generative AI programs.
Litigants may also want to include provisions requiring that “confidential” and “attorneys’ eyes only” documents be walled off from law firms’ internal, proprietary generative AI programs so that those programs can’t access the documents.
As generative AI improves and becomes more accessible, it may change the way many lawyers perform their work. It’s time for them to start thinking about how to address the new realities of the technology.
Author Information
Owen R. Wolfe is partner and Eddy Salcedo is senior counsel at Seyfarth Shaw.
Summer associates Kimberly Garcia and Colin Smith also contributed to this article.