Generative AI Use In-House – Realizing the Potential While Avoiding the Hype

Implementation of Generative AI[i]

In an August 20, 2024, appearance on Bloomberg Technology, the Chairman and CEO of multinational cybersecurity company Palo Alto Networks, Inc. (“PANW”) estimated that 20% of employees within organizations are currently using or experimenting with generative artificial intelligence technology. Further, he expressed that organizations are often unaware of their employees’ use of, or experimentation with, generative artificial intelligence. As one of the largest providers of network security solutions, PANW and companies like it are uniquely placed to view generative AI’s actual use (or misuse). Based on the insights from such technology companies, prudence demands that each organization take a proactive approach in developing a strategy for implementing generative AI. In short, generative AI is likely being used within most organizations, so organizations must get ahead of, or at least not fall further behind, the use of generative AI within the organization.

As a starting point, each organization is unique and therefore must develop its own individualized strategy for implementing generative artificial intelligence (“AI”). Unfortunately, there does not appear to be a cookie-cutter approach. Yet many organizations would benefit from a strategy that incorporates (a) the development of an AI governance framework and (b) the process by which that governance will be applied and continuously refined. Frequently, an effective means of developing the AI governance framework and governance process is the creation of an AI governance group or similarly focused group of dedicated stakeholders within the organization (collectively, the “AI Governance Committee”).

The AI Governance Committee would typically include a core of individuals from the information technology and legal departments, as well as potentially the ethics office, internal and external communications, human resources, risk management, data specialists, and corporate security, among others. Obviously, deciding whom to include within the core AI Governance Committee without expanding beyond what is effective will often be a challenge. But equally challenging is how lean organizations can develop sufficient breadth and depth in the AI Governance Committee to address the multitude of issues AI presents while (likely) simultaneously continuing to handle their respective pre-AI Governance Committee scopes of work.[ii] Additionally, beyond the core AI Governance Committee, subject matter experts should be included depending upon the specific use cases.[iii] Again, unfortunately, there is no silver bullet; the organization must instead thoughtfully work through the issues.

A useful approach is for the core AI Governance Committee to develop, and constantly refine, the organization’s AI strategy, including the organization’s “Goldilocks” AI rules, which attempt to prevent uses the organization deems improper while not stifling innovative and creative uses of a technology that, at minimum, appears able to dramatically increase productivity.[iv] Yet it should be kept at the forefront when developing the governing strategy that solving business issues is the focus, not merely implementing technology for its own sake.

Beyond strategy development, the AI Governance Committee has the critical role of developing, and continuously refining, the AI governance framework through which the organization implements its strategy. The framework should be dynamic so as to allow adaptation to AI’s evolving and broadening uses, as well as to promote collaboration within the organization, openness to discussing uses, and support for AI initiatives. That dynamism, however, should be subject to regular review, auditing, and updating of governance practices. Additionally, the application of the governance framework likely should be effectuated through specifically defined processes.

Predictions about the use and development of technology are inherently challenging, yet a few moments of reflection lead to the conclusion that many organizations will likely ultimately need to develop new positions such as Head of Organizational AI, Chief Data & Analytics Officer, or similar, as well as the inevitable supporting personnel, to address the uses and governance of AI.[v]

In-House Counsel’s Role in Generative AI

At the most fundamental level, in-house counsel needs to attempt to ensure that all AI used by and within the organization complies with governing laws and regulatory requirements. Even this fundamental duty will be challenging, not only because of the depth and breadth of AI uses, but also because whether AI is being used in a given process may not be readily apparent, even assuming the organization’s, its vendors’, and its customers’ good faith and wholly transparent efforts. Additionally, as the AI landscape continues to evolve rapidly, the AI Governance Committee, including in-house counsel, must establish a process for ongoing monitoring of AI-related legal and regulatory developments. Only with a systematic process can the organization, and more particularly in-house counsel, stay abreast of new court decisions, regulatory requirements and guidance, and industry best practices. Of course, the systematic process must both keep the AI Governance Committee informed of new decisions, rules, and regulations, and develop and maintain AI-specific compliance programs, including regular audits and assessments of AI systems to ensure they continue to meet legal and ethical standards.

AI uses will often raise issues involving intellectual property (both the organization’s IP and the use of external IP), risk management, procurement, compliance, communications, human resources, and privacy and confidentiality (both internal and external to the organization), among potentially many others. These issues will arise under both United States law and the laws of other jurisdictions.[vi] In most organizations, in-house counsel is already enmeshed in these issues across many areas. As such, in-house counsel serving at the forefront of the organization’s AI strategy and governance is a natural, and likely absolutely necessary, fit.

However, beyond mere legal compliance, in-house counsel serving on the AI Governance Committee must work to develop a means of assessing risk, as well as actively managing the risk arising across the organization from the various internal groups’ uses of AI, along with vendors’ and customers’ uses of AI. Likewise, in-house counsel often is well placed to assist with, and frequently is a necessary primary player in, the development of ethics guidelines for the use of AI within the organization. Of course, whether discussing AI or traditional business issues, effective risk mitigation is not merely about technical safeguards; it requires a holistic approach that encompasses governance, policy, and culture. Establishing clear lines of accountability, developing comprehensive usage guidelines, and fostering a culture of responsible AI use throughout the organization will likely be critical aspects of risk mitigation. It also requires a dive into vendor relationships, subjecting AI providers to intense scrutiny to ensure vendors’ practices align with the organization’s risk tolerance and ethical standards.

Given the considerable and comprehensive nature of in-house counsel’s likely tasks relating to the organization’s use of AI, a basic understanding of how the technology underlying generative AI (large language models) works will be advantageous, as is an acknowledgement that the operation of the underlying technology will continue to evolve, perhaps without extensive disclosure of such evolution or changes.[vii]

In sum, the organization’s use of AI technologies clearly presents a myriad of possibilities and benefits, but also a dauntingly wide minefield of legal and ethical challenges for in-house counsel, which will require intentional and thoughtful ongoing analysis, ideally with the assistance of an AI Governance Committee.[viii] Further, as indicated, the effort will be incomplete and flawed if approached as a one-and-done necessity. Instead, an organization’s successful AI use likely hinges on a continuous process by engaged individuals (including in-house counsel) who make up the core AI Governance Committee and who have been empowered to address and effectuate the successful development, strategy, and ongoing refinement of the organization’s use of AI. Moreover, the failure to undertake such a holistic approach seems almost certain to run afoul of the risks arising from the unavoidable use of AI by and within the organization or, at minimum, by individual actors within it.

With this effort in mind, an unscientific canvassing of in-house counsel over several months has shown that contract management software using AI is often an excellent and frequent starting point for the legal department’s own use of AI. Obviously, in-house attorneys must take a proactive role in managing an organization’s contracts with vendors, service providers, and customers, so use of AI-enhanced contract management software has an immediate and positive impact. While there is seemingly no end to solicitations pitching AI-enhanced contract management software, several software products seem to make the short list for many organizations. It is outside the scope of this writing to make specific software recommendations, but the authors can connect in-house counsel with other in-house counsel who are willing to share their insights about software products they have vetted and those they are currently using.

The various AI-enhanced contract management products offer varying features and benefits, but many that seem to be moving to the forefront offer comprehensive, end-to-end solutions that integrate advanced AI capabilities throughout the contract’s lifecycle, including necessary reviews and signature authority. These leading platforms typically provide metadata extraction, which automatically pulls key information from contracts without extensive manual input or model training. They often include advanced clause comparison and risk rating capabilities, allowing for efficient analysis of non-standard language and potential legal issues. Natural language processing for data extraction from both structured and unstructured documents is another common feature, enhancing visibility and control over contract information. AI-powered analytics enable quick search, pattern recognition, and anomaly detection across large contract datasets, while on-demand contract and clause generation leveraging AI chat interfaces streamlines the drafting process without sacrificing compliance. Many of these systems also offer AI-enhanced or even AI-controlled workflows that can adapt based on observed patterns, improving efficiency in contract routing and approvals based on assigned levels of authority. Additionally, integration capabilities with other business systems, such as CRM and billing platforms, which ensure seamless data flow across the organization, can be a tremendous benefit.
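
By way of illustration only, the following Python sketch shows, in greatly simplified form, the kind of metadata extraction these platforms automate. The field patterns and the sample clause are hypothetical; commercial products rely on trained language models rather than hand-written patterns like these.

import re
from dataclasses import dataclass

@dataclass
class ContractMetadata:
    effective_date: str | None = None
    governing_law: str | None = None
    auto_renews: bool = False

# Hypothetical patterns for two common contract fields (illustrative only).
PATTERNS = {
    "effective_date": re.compile(r"effective\s+as\s+of\s+([A-Z][a-z]+\s+\d{1,2},\s+\d{4})", re.I),
    "governing_law": re.compile(r"governed\s+by\s+the\s+laws?\s+of\s+(?:the\s+State\s+of\s+)?([A-Za-z ]+?)[.,]", re.I),
}

def extract_metadata(text: str) -> ContractMetadata:
    """Pull a few key fields from raw contract text."""
    meta = ContractMetadata()
    date_match = PATTERNS["effective_date"].search(text)
    if date_match:
        meta.effective_date = date_match.group(1)
    law_match = PATTERNS["governing_law"].search(text)
    if law_match:
        meta.governing_law = law_match.group(1).strip()
    meta.auto_renews = bool(re.search(r"automatically\s+renew", text, re.I))
    return meta

sample = ("This Agreement is effective as of January 1, 2025, shall be governed "
          "by the laws of the State of Delaware, and shall automatically renew "
          "for successive one-year terms.")
print(extract_metadata(sample))

The value of the commercial platforms lies in performing this kind of extraction reliably across thousands of messy, non-uniform documents; the sketch conveys only the shape of the automation.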

In addition to AI-enhanced contract management software, or hopefully with its assistance, in-house attorneys’ evaluation of vendor and customer contracts should now include consideration and negotiation of AI use and transparency provisions, whether the AI use is by the organization, its vendors, its customers, or all three. These considerations and negotiations should include specifications regarding data ownership, data uses, data privacy, data security, and representations about compliance with data regulation, along with traditional considerations of performance and liability limitations/transfers, albeit with significant thought given to how the use of AI changes these considerations for the specific organization. Further, in-house counsel will need to understand vendors’ and customers’ uses of and requirements regarding AI, an understanding that likely can be obtained only through significant due diligence during the contracting process.

AI relies on large amounts of data from sources that are often poorly or incompletely identified, or even misunderstood, and therefore in-house counsel must assist in building a significant data governance practice for the organization. This includes developing policies for data collection, storage, and use that comply with relevant privacy regulations.[ix] The involvement of an AI Governance Committee will facilitate the implementation of appropriate data protection measures, including data minimization, anonymization, and secure data handling practices. In assisting in-house counsel, the AI Governance Committee should also address issues of consent and transparency in AI-driven data processing, both within the organization and with those with whom the organization interacts, such as vendors and customers. As alluded to above, in most instances the AI governance framework developed by the AI Governance Committee will need to differentiate between data used for internal and external applications and uses of AI. In sum, there will likely be four categories of data: the organization’s data that will be used solely within the organization; the organization’s data that may be used outside the organization; data from outside the organization that will be used within the organization (often in conjunction with the organization’s own data); and data from both outside and inside the organization that will be used for external purposes. Although governance rules regarding these four categories of data will overlap, each will have critical distinctions.
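
A minimal sketch, assuming the four-category taxonomy above, of how a governance process might tag a dataset by its origin and intended use; the category names and the mapping logic are illustrative, not a prescribed scheme.

from enum import Enum

class DataCategory(Enum):
    INTERNAL_ORIGIN_INTERNAL_USE = "organization's data used solely within the organization"
    INTERNAL_ORIGIN_EXTERNAL_USE = "organization's data that may be used outside the organization"
    EXTERNAL_ORIGIN_INTERNAL_USE = "outside data used within the organization"
    MIXED_ORIGIN_EXTERNAL_USE = "inside and outside data used for external purposes"

def classify(origin_internal: bool, use_internal: bool) -> DataCategory:
    """Map a dataset's origin and intended use to a governance category (simplified)."""
    if origin_internal and use_internal:
        return DataCategory.INTERNAL_ORIGIN_INTERNAL_USE
    if origin_internal:
        return DataCategory.INTERNAL_ORIGIN_EXTERNAL_USE
    if use_internal:
        return DataCategory.EXTERNAL_ORIGIN_INTERNAL_USE
    return DataCategory.MIXED_ORIGIN_EXTERNAL_USE

# Example: vendor-supplied training data used only inside the organization.
print(classify(origin_internal=False, use_internal=True).value)

Even a simple tag of this kind gives the governance framework a hook: each category can then carry its own handling, consent, and disclosure rules.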

Recognizing that contracts with vendors and customers are ultimately simply sets of promises that the law allows the contracting parties to sue to enforce, each of these considerations arising from the use of AI must include a very practical analysis of the means by which enforcement can be economically accomplished. This enforcement aspect of contract management dovetails with the risk management principles discussed above: holistic governance, policy, and culture; clear lines of accountability; comprehensive usage guidelines; and intense scrutiny of AI providers to ensure their practices align with the organization’s risk tolerance and ethical standards.

As alluded to above, at the forefront of an organization’s risks is intellectual property. The opaque nature of many AI algorithms and the vast datasets they utilize create a scenario where proprietary information could be inadvertently incorporated into AI models or outputs.[x]

Thus, organizations must grapple with the possibility of inadvertently using copyrighted material or patented technologies in their AI-generated outputs, a possibility compounded by the differing and evolving jurisdictional approaches to data protection, intellectual property rights, privacy rights, and AI more generally. The AI Governance Committee will need to create rules or guardrails that include processes for determining the copyright status and patent implications of source materials used in AI training and generation. Obviously, this is a significant challenge given the often ambiguous nature of the data used to train the AI platforms themselves. Therefore, organizations must exercise caution and conduct robust due diligence before integrating AI technologies into their operations, which, when possible, should include performance guarantees from AI vendors.

Concurrently, protecting an organization’s own intellectual property and confidential information when using AI tools requires significant attention from the AI Governance Committee. The AI Governance Committee, including in-house counsel, must develop strategies to prevent the unintentional disclosure of proprietary information to AI providers or through AI-generated outputs. This may involve implementing strict data segregation policies, establishing clear guidelines for employee use of AI tools, and negotiating robust confidentiality agreements with AI service providers.
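
As a hedged illustration of one such technical safeguard, a simple pre-submission filter might scrub likely confidential identifiers before a prompt leaves the organization. The patterns below (a Social Security number format, an email address, a hypothetical project codename) are examples only; a real deployment would use organization-specific classifiers and a vetted redaction tool.

import re

# Illustrative redaction patterns; not a substitute for a real classifier.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"Project\s+\w+", re.I), "[REDACTED-CODENAME]"),
]

def scrub(prompt: str) -> str:
    """Replace likely confidential identifiers before sending text to an external AI tool."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Summarize Project Falcon's terms; contact jdoe@example.com, SSN 123-45-6789."))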

Likewise, this should include having a monitoring system in place to identify issues early, coupled with well-rehearsed incident response plans that can be swiftly enacted. As with anything new to an organization, in many ways the organization may have difficulty fully appreciating what it does not know.[xi] Thus, the monitoring system itself must be audited and refined with a regularity appropriate for rapidly developing and evolving technology. Security and compliance considerations should be built into every aspect of AI deployment. The reality is that this effort requires a strong commitment to privacy-by-design principles, rigorous consent management, and transparency about AI use. Organizations must be prepared to explain their AI systems in plain language, clearly label AI-generated content, and be upfront about how AI is being used in vendor and customer interactions.
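
As a minimal sketch of the monitoring idea, and assuming a simple append-only log file (here named ai_usage_log.jsonl, a hypothetical choice), each AI interaction could be recorded in a structured form that later audits can query:

import datetime
import json

def log_ai_use(tool: str, user: str, purpose: str, reviewed_by_human: bool) -> None:
    """Append a structured record of an AI interaction for later audit."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "purpose": purpose,
        "reviewed_by_human": reviewed_by_human,
    }
    with open("ai_usage_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("contract-review-assistant", "jdoe", "NDA clause comparison", reviewed_by_human=True)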

Another issue to address in risk management arising from the use of AI is the potential for bias and discriminatory outcomes. News reports are rife with claims that AI systems, trained on historical data, have perpetuated or could perpetuate, or even amplify, existing societal biases. Hence, when AI is used in decision making, these issues could lead to unfair or illegal decision-making processes in areas such as hiring and advancement within the organization, as well as in other uses within the organization’s performance of its business operations, including those affecting customers. Attempts to avoid such bias, and to demonstrate those efforts, begin with a dive into the AI systems themselves, scrutinizing everything from the provenance of training data to the intricacies of model architecture. The AI Governance Committee will likely need to set forth processes to scrutinize the organization’s use of AI for potential bias, such as implementing testing protocols and ongoing monitoring to ensure unintentional biases are not resulting from the underlying data set, the model architecture, or the decision-making processes.
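
One widely referenced testing screen is the EEOC’s “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate warrants closer scrutiny for disparate impact. A simplified Python sketch follows; the screening figures are hypothetical, and a real protocol would involve far more than this single ratio.

def adverse_impact_ratio(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Compute each group's selection rate relative to the highest-rate group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes from an AI resume-screening tool.
ratios = adverse_impact_ratio(
    selected={"group_a": 45, "group_b": 24},
    applicants={"group_a": 100, "group_b": 80},
)
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")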

A related issue is the potential for AI platforms to expose private or sensitive information through their outputs, or to be vulnerable to adversarial attacks designed to extract protected data. Additionally, the use of AI for profiling and automated decision-making can raise significant privacy concerns, particularly when such decisions impact individuals’ rights or legal status.[xii] As such, consideration must be given to conducting an assessment of AI use within the organization to identify potential privacy risks, evaluate the necessity and proportionality of data processing, and determine appropriate safeguards. Thus, thoughtful due diligence is a necessary step for the AI Governance Committee, and privacy issues should be addressed throughout.

Risk mitigation should include a comprehensive set of disclaimers addressing the potential inaccuracies and limitations of AI-generated or AI-enhanced information, particularly when such information may be disseminated externally. The disclaimers should address whether, and the extent to which, the information or content may have been generated by or with the use of AI technologies, which helps set appropriate expectations for the audience, including as to accuracy and reliability. Consideration should also be given to whether to instruct or suggest independent verification. Of course, disclaimers must be tailored to specific use cases.

The AI governance framework should encompass a clear set of ethical principles aligned with the organization’s mission and values, which often should address issues of fairness, transparency, accountability, privacy, and security. These principles will be in addition to the ethical obligations imposed on in-house counsel by the various bar organizations. As to those obligations, each jurisdiction has, or will have, its own rules and requirements governing legal content generated by or with the assistance of AI, yet most boil down to this: reliance on AI-generated work without verifying the existence and accuracy of cited case law violates the duty of diligent representation and of candor to the court or other judicial body. Further, confidentiality issues amounting to a breach of attorney-client privilege very likely will arise from entering confidential information into an AI platform. Although carefully crafted agreements with AI platforms prohibiting the platform’s use of an organization’s data are sometimes available, a subject that should give an organization pause is that a platform’s agreeing to many such agreements would appear likely to reduce the data set from which the platform is “learning,” and thereby, over time, to reduce the platform’s depth, breadth, and accuracy.

Endnotes:

[i] Portions of this paper were written with the assistance of generative artificial intelligence, including but not limited to ChatGPT and Perplexity. Stated differently, the authors acknowledge the use of generative AI tools, including ChatGPT and Perplexity AI, in the development of this paper. These tools assisted with various portions of the drafting and idea exploration. All AI-generated content was thoroughly reviewed and edited by the authors, who take full responsibility for the final work. AI was not used for data analysis or core intellectual contributions.

[ii] See Jamie Dimon’s Letter to Shareholders, Annual Report 2023, JPMorgan Chase & Co. (noting that JPMorgan Chase & Co. “now includes more than 2,000 AI/machine learning (ML) experts and data scientists…” with “over 400 use cases in production in areas such as marketing, fraud and risk — and they are increasingly driving real business value across our businesses and functions. We’re also exploring the potential that generative AI (GenAI) can unlock across a range of domains, most notably in software engineering, customer service and operations, as well as in general employee productivity. In the future, we envision GenAI helping us reimagine entire business workflows.”).

[iii] See Tim Fountaine, Brian McCarthy, and Tamim Saleh, AI and Machine Learning, Building the AI-Powered Organization, Harvard Business Review, July–August 2019, pp. 62–73 (advocating for a movement from siloed work to interdisciplinary collaboration or “cross-functional teams” to scale up and effectively use generative AI).

[iv] See Deloitte Legal, Generative AI: A guide for corporate legal departments (June 2023) (citing Goldman Sachs’s estimate that AI has the potential to automate up to 44% of legal tasks, and referencing a generative AI platform’s prediction that, within a year, generative AI would match the capabilities of a paralegal). Since that one-year mark passed no later than June 2024, a judgment can now be made as to the accuracy of that unidentified generative AI platform.

[v] See Jamie Dimon’s Letter to Shareholders, supra note ii (“Recognizing the importance of AI to our business, we created a new position called Chief Data & Analytics Officer that sits on our Operating Committee…”).

[vi] See Foo Yun Chee, Martin Coulter and Supantha Mukherjee, Europe agrees landmark AI regulation deal (December 11, 2023, 10:29 AM CST), https://www.reuters.com/technology/stalled-eu-ai-act-talks-set-resume-2023-12-08/. See also the several bills California has considered proposing requirements regarding the use of generative AI, including: AB 3048, which seeks to amend the California Consumer Privacy Act by requiring browsers to include settings allowing users to easily opt out of data collection and sharing; AB 3204, which would introduce a new registration requirement with the California Privacy Protection Agency; SB 892, which would establish safety, privacy, and non-discrimination standards for AI systems used in public contracts; SB 896, which seeks to increase transparency of government use of generative AI by mandating clear disclosures to users interacting with such systems and calling for risk evaluations of automated decision-making tools used by state agencies (vendors in this space may need to adapt their offerings to facilitate these disclosures and assessments); and AB 2930, which seeks to require developers and users of automated decision-making technologies to conduct impact assessments and notify individuals affected by consequential decisions, as well as to prohibit the use of algorithms shown to result in discrimination.

[vii] See Timothy B. Lee and Sean Trott, A jargon-free explanation of how AI large language models work (July 31, 2023), https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/; Sean Trott, So you want to be an LLM-ologist? How to get started studying LLMs (August 20, 2024), https://seantrott.substack.com/p/so-you-want-to-be-an-llm-ologist; IBM Data and AI Team, Understanding the different types of artificial intelligence (October 12, 2023), https://www.ibm.com/blog/understanding-the-different-types-of-artificial-intelligence/; Joyce Chai, What is Generative AI? What are Large Language Models (LLM)?, video, https://online.umich.edu/collections/artificial-intelligence/short/what-is-generative-ai-what-are-llm/; Helen Toner, What Are Generative AI, Large Language Models, and Foundation Models? (May 12, 2023), https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/.

[viii] See generally Sterling Miller, Generative AI: What in-house legal departments need to know, Thomson Reuters (November 30, 2023).

[ix] See Foo Yun Chee, Martin Coulter and Supantha Mukherjee, supra note vi.

[x] See generally Bartz et al. v. Anthropic, PBC, Case No. 3:24-cv-05417 (N.D. Cal. filed Aug. 19, 2024) (authors sued Anthropic for allegedly using pirated copies of their books and other copyrighted materials to train Claude, asserting “[i]t is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works”); The New York Times v. Microsoft Corporation, OpenAI, Inc. et al., Case No. 1:23-cv-11195 (S.D.N.Y. filed December 27, 2023) (The New York Times sued OpenAI, Microsoft, and others, alleging copyrighted news articles were used to train ChatGPT and other AI models without authorization, infringing the newspaper’s copyrights and competing with its journalism); The Authors Guild, et al. v. OpenAI, Inc., et al., Case No. 1:23-cv-8282-SHS (S.D.N.Y. filed September 29, 2023) (authors sued OpenAI alleging it used their copyrighted books to train ChatGPT without authorization). There are many others.

[xi] Donald Rumsfeld’s famed February 12, 2002, statement appears apropos: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

[xii] See Isabel Gottlieb, By the Numbers: Six AI Questions for In-House Counsel in 2024 (January 2, 2024), https://news.bloomberglaw.com/business-and-practice/by-the-numbers-six-ai-questions-for-in-house-counsel-in-2024.