AI Use is Promising Yet Risky for Government Subpoenas and CIDs

AI has the potential to become an important tool in improving the time-consuming and expensive process of responding to government subpoenas or civil investigative demands (CIDs). Before charging headlong down this path, companies should weigh the benefits and risks of using AI.

Potential Benefits

Responding to a subpoena or CID can be resource intensive. Relevant custodians must be identified and their materials collected. Those materials must then be reviewed for responsiveness and produced to the government. At the same time, companies are identifying potential areas of exposure and deciding whether, and how, to address them with the government.

AI shows great promise in helping streamline these tasks. AI search tools have the potential to help identify responsive documents without needing to interview employees or conduct lengthy reviews. Chatbots could be used to help highlight areas of exposure based on the subpoena or CID requests. Chatbots can also help craft substantive responses. In short, AI can help individuals complete certain tasks more efficiently, which allows the legal team and other personnel to handle pressing matters more capably.

Potential Downsides

What are the downsides of using AI to respond to government subpoenas or CIDs? One concern is that replacing the human actor with AI could upend attorney-client privilege and work product protections.

For instance, are the prompts input into the AI tool considered attorney work product? Most would argue yes, assuming an attorney, or someone working at the direction of an attorney, crafts the prompts. But what about the AI outputs? The government may seek to obtain the outputs on the grounds that they are facts; facts, as opposed to legal opinions, are generally not privileged.

The government could also argue that AI outputs are not protected because attorney work product protections cover only the work product of humans. The Copyright Office has been struggling with a similar issue: determining how much human input is required to trigger copyright protection for works created largely or solely by AI. Depending on how the Copyright Office resolves this issue, it may set a precedent for whether AI-produced works warrant the same protections afforded to human-created works. If they do not, AI outputs may not enjoy the protections usually afforded to attorney work product.

Beyond privilege concerns, the government's involvement in a company's response to a subpoena or CID may increase with the availability of AI tools. If the government learns that a company has AI capabilities, it may seek to co-opt those tools to run its own queries and prompts. It is common for the government to review the search terms used to identify responsive documents and even to propose its own search terms for a company to run across a document database. To the government, asking a company to run certain AI prompts may seem like a mere extension of this common practice. Most companies, however, likely would not be comfortable giving the government access to such tools.

Managing Risks

How should a company minimize these potential risks? While some companies may consider simply not adopting any AI tools, that is likely not a viable option for the long term, or even for the short term. One potential option is to cabin the use of AI to certain departments, or otherwise curtail functionality, so the legal department is not directly using AI. Establishing firewalls around the legal department could avoid some of the issues identified above while still providing AI benefits to other departments within the company.

Another option is to leverage outside counsel to take the lead on responding to government requests. A company could grant outside counsel temporary access to any internal AI tools so the benefits of AI could be realized in responding to the subpoena or CID. At the same time, outside counsel would likely be better positioned to push back against the government regarding any aggressive demands related to AI use.

One thing is clear: AI holds incredible promise, but companies must ensure that their use of AI is appropriately supervised, and its outputs closely reviewed and monitored, by humans, including attorneys, to minimize the associated risks.

This article was reproduced with permission. Published Oct. 17, 2023. Copyright 2023 Bloomberg Industry Group 800-372-1033. www.bloombergindustry.com.

***

Wiley’s Privacy, Cyber & Data Governance and Artificial Intelligence (AI) teams assist clients with government advocacy, as well as compliance and risk management approaches to privacy, AI technology, and algorithmic decision-making, including compliance and risk management issues for generative AI. Please reach out to any of the authors with questions.
