Winning AI Gold For In-House Legal Departments & Avoiding the Agony of Defeat

The recent explosion of attention on artificial intelligence, driven by platforms such as ChatGPT, Bard, DALL-E, and Midjourney, has created tremendous excitement (and some fear) about the possibilities for companies. We have all seen the stories about attorneys who filed a brief or supporting memorandum of law containing AI-generated misrepresentations of law or wholly fictitious case law. Misuse of technological innovations by lazy, unscrupulous, or simply overburdened attorneys and others in the business world is hardly surprising. Nevertheless, we all likewise recognize that AI, when properly harnessed, has the ability (or will in short order) to improve attorneys' productivity and the quality of their work product in both transactional work and litigation, for those on litigation's front lines and for those who supervise and need to understand them. How we get to this improved quality and productivity, who pays for it, who owns it, who reaps the financial rewards, and a myriad of other issues are just as uncertain as any other prognostication about the use of AI.

Yet in-house and outside counsel must seek to harness AI's benefits while protecting themselves and their clients (internal and external) from the risks inherent in the use, and misuse, of this rapidly developing tool. Only by understanding both the benefits and the potential detriments of this emerging technology can counsel efficiently and effectively manage it and leverage it for the benefit of their corporate client and in-house legal department.

Primer on AI and Its History

AI as a concept has been discussed since at least the 1950s. Early iterations of AI focused on developing simple computer programs that could play games like checkers. In 1997, the chess supercomputer Deep Blue defeated Garry Kasparov in a televised match, marking the first time a computer had beaten a reigning world champion.[i] More recently, with constantly increasing computing power, the public has become captivated by the latest generation of AI platforms.

Recent developments in AI have been as much about computing power as about new “technology.” Setting aside the ever-increasing computing power, the other primary driver is the Large Language Model (LLM). Large Language Models are algorithms intended to summarize, translate, predict, and generate text to convey ideas and concepts. These models rely on incredibly large data sets to “feed” the algorithm, allowing it to “learn” and essentially predict future outcomes based on past results.[ii]
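The core idea of predicting text from past examples can be illustrated with a deliberately simplified sketch (this is not how a real LLM works; the tiny "training corpus" below is hypothetical): count which word most often follows each other word, then predict accordingly.

```python
from collections import defaultdict, Counter

# Toy illustration of "learning" from past text: tally bigram frequencies
# in a tiny hypothetical corpus, then predict the most likely next word.
corpus = "the court granted the motion and the court denied the appeal".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often `nxt` follows `prev`

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "court" follows "the" most often in this corpus
```

Real LLMs replace these simple counts with neural networks trained on billions of documents, but the underlying intuition (predicting likely continuations from patterns in past data) is the same.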

In October of 2022, the White House published a “Blueprint for an AI Bill of Rights.”[iii] The White House’s “Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”[iv] The five principles are: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.

Recently, the European Union reached a provisional agreement on a landmark act, the Artificial Intelligence Act, intended to govern the use of AI in EU member countries. “The accord requires foundation models such as ChatGPT and general-purpose AI systems (GPAI) to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.”[v] The EU AI Act takes a risk-based approach that varies depending on whether a use case is considered minimal/no-risk, limited-risk, high-risk, or unacceptable-risk. Depending on the categorization, certain use cases are essentially unregulated while others must comply with stringent disclosure requirements or risk substantial fines and exposure to individual citizen complaints.

The purpose of this seminar is to provide insight into opportunities for in-house counsel to leverage AI to improve quality and productivity. This seminar will focus our discussion of AI on two primary areas: AI for in-house counsel and intellectual property implications.

Types of AI – what they are, what they do, how they can be leveraged and the potential pitfalls

There are three generally recognized categories of AI: Narrow AI, General AI, and Super AI. Currently, the only AI that exists is Narrow AI; the other two categories remain theoretical concepts. Narrow AI includes several different types based on functionality. The two primary types are briefly explained below:

Reactive Machine AI

Reactive Machine AI is designed to perform a single task with no memory. This type of AI performs its task using an existing data set and does not store its previous decisions. Common examples that people experience (likely without realizing it) are Spotify or Netflix recommendations, which analyze what a user has watched or listened to and recommend material based on that existing data set. The original AI programs designed to play games would also fall into this category because their decision making is based on the pieces currently on the board.
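A minimal sketch of this kind of reactive recommendation, with an entirely hypothetical catalog and genre tags, might score unwatched titles by how many tags they share with what the user has already watched:

```python
# Toy "reactive" recommender: no memory of past recommendations, just a
# fixed data set and the user's watch history. Titles and tags are made up.
catalog = {
    "Legal Eagles":   {"drama", "legal"},
    "Space Wars":     {"sci-fi", "action"},
    "Court of Law":   {"legal", "thriller"},
    "Robot Uprising": {"sci-fi", "thriller"},
}

def recommend(watched):
    """Rank unwatched titles by genre overlap with the watched set."""
    seen_tags = set().union(*(catalog[t] for t in watched))
    unseen = [t for t in catalog if t not in watched]
    return sorted(unseen, key=lambda t: len(catalog[t] & seen_tags), reverse=True)

print(recommend(["Legal Eagles"]))  # "Court of Law" ranks first (shares "legal")
```

Production recommenders are far more sophisticated, but the shape is the same: an existing data set in, a ranked suggestion out, with no decision stored for next time.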

Limited Memory AI

Limited Memory AI is “AI [that] can recall past events and outcomes and monitor specific objects or situations over time. Limited Memory AI can use past- and present-moment data to decide on a course of action most likely to help achieve a desired outcome.”[vi]

Generative AI

Within Limited Memory AI sits Generative AI, the type you currently see most in the news. Generally speaking, generative AI is a type of artificial intelligence that can produce outputs (text, audio, video, code, images, etc.) based on prompts. The most common examples are ChatGPT, DALL-E, and Microsoft Copilot.

In order to address the legal issues that arise from AI, we need to first consider the possible uses within a company and/or in-house legal department in order to identify and discuss the potential risks. With the growth of AI technologies, the applications have become innumerable and are constantly changing.

Business and Legal In-House Uses for AI

The evolution and proliferation of AI products has spurred countless new companies that are focusing on the newest AI product to sell to businesses. The potential applications for AI seem limitless but there appear to be dominant categories that are already seeing early adoption within corporate environments and in-house legal groups.

Document Review and Management and E-Discovery

AI offers notable benefits for summarizing documents and emails, catering to the increasing need for efficient information processing in various professional settings. One primary advantage is the time-saving aspect. AI-driven algorithms can rapidly analyze and distill large volumes of textual information into concise and coherent summaries. This expedites the review process for professionals who deal with extensive documentation regularly, allowing them to focus on critical details without being overwhelmed by information overload.

Furthermore, AI-powered summarization tools contribute to improved productivity and decision-making. By extracting key insights and summarizing the main points of documents and emails, these tools help users quickly grasp the essence of the content. This is particularly valuable in scenarios where time-sensitive decisions need to be made, such as in business negotiations, legal reviews, or project management. AI-generated summaries enable professionals to efficiently prioritize their tasks and allocate their attention to areas that require immediate action or consideration. Additionally, these tools aid in maintaining a clear and consistent understanding of the information across team members, fostering effective communication and collaboration in the workplace.

AI has become a game-changer in the realm of e-discovery, offering significant advantages for legal professionals tasked with sifting through vast amounts of electronic data during legal proceedings. One of the primary benefits is the efficiency gained in the document review process. AI-powered tools can swiftly analyze and categorize massive datasets, automatically identifying relevant documents, key concepts, and patterns. This capability drastically reduces the time and human effort traditionally required for manual document review, enabling legal teams to expedite e-discovery processes and meet tight deadlines more effectively.

Another crucial advantage of employing AI in e-discovery is the enhancement of accuracy and consistency. Machine learning algorithms can learn from human reviewer decisions and continuously improve their understanding of case-specific criteria. This iterative learning process ensures that AI systems become increasingly adept at recognizing relevant information and potential legal risks. By minimizing the risk of oversight and human error, AI-powered e-discovery tools contribute to the production of more thorough and reliable results, bolstering the overall quality and defensibility of the legal case. Ultimately, the integration of AI in e-discovery not only accelerates the pace of legal proceedings but also elevates the precision and reliability of the information uncovered during the process.
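The first-pass triage described above can be caricatured in a few lines. The sketch below (hypothetical keywords and documents; real e-discovery platforms use trained machine-learning models, not keyword counts) simply ranks documents by overlap with case-issue terms so reviewers see likely-relevant material first:

```python
from collections import Counter

# Greatly simplified e-discovery triage: score each document by how many
# case-issue keywords it contains, then rank. All data here is hypothetical.
issue_keywords = {"merger", "disclosure", "breach", "indemnify"}

documents = {
    "doc1": "the merger agreement requires full disclosure of liabilities",
    "doc2": "lunch schedule for the quarterly picnic",
    "doc3": "notice of breach and demand to indemnify under the merger terms",
}

def relevance_score(text):
    """Count keyword hits in a document (a crude proxy for relevance)."""
    words = Counter(text.lower().split())
    return sum(words[k] for k in issue_keywords)

ranked = sorted(documents, key=lambda d: relevance_score(documents[d]), reverse=True)
print(ranked)  # doc3 (3 hits) outranks doc1 (2 hits) and doc2 (0 hits)
```

The iterative learning the text describes corresponds to replacing the fixed keyword list with a model retrained on reviewer decisions, which is where real tools earn their accuracy gains.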

Contract Drafting

Many of the generative AI platforms have the ability to draft contract documents for a user based on the prompts that are input into the platform. There are a number of companies that have spun up to directly support this function.

AI has emerged as a potentially powerful tool in the realm of contract drafting, with the opportunity to significantly expedite the drafting process and improve its accuracy. AI has the capacity to quickly analyze large data sets of communications and prior legal documents, identify and extract patterns, incorporate pertinent contractual provisions and language, and generate drafts automatically. This has the potential not only to accelerate the drafting process but also to ensure consistency and compliance with legal standards. AI-based contract drafting tools can assist legal teams in identifying potential risks, ensuring that contracts align with regulatory requirements, and reducing the likelihood of errors that may lead to legal disputes.

More specifically, AI's ability to assist in contract drafting could extend beyond automation: advanced natural language processing has the potential to enable AI systems to comprehend and interpret complex legal language, making them adept at identifying nuanced contractual clauses and potential pitfalls. These systems can also learn from past contracts and negotiations, providing insights to enhance negotiation strategies and improve future contract terms. Overall, AI in contract drafting has the ability not only to enhance efficiency but also to contribute to the creation of more robust, legally sound agreements, empowering organizations to navigate the complexities of contractual relationships with greater confidence.[vii]

Legal Research

Another area that has seen early adoption is the use of AI for legal research. The largest companies in the space have created platforms (e.g., Westlaw Edge, Westlaw Precision, and Lexis+ AI) that claim to leverage AI technology to assist users with legal research. This includes natural language searching (closer to searching in Google than the terms-and-connectors searching used in traditional research programs). Additionally, these platforms are designed to help users find pertinent case law more efficiently. They can also cite-check briefs and summarize the cited law for both relevance and accuracy, and can even suggest alternative citations when the platform believes there is stronger case law to support certain propositions.

One of the issues firms face is the additional costs these features add beyond the already expensive subscription fees that firms pay for legal research platforms. Moreover, many clients will not pay for portions of the firms’ subscription fees for electronic research. Beyond the associated costs, attorneys must still review the cited case law to ensure that the AI platform’s suggestions are accurate to comply with their ethical obligation related to court filings. This could increase the time required to prepare a brief. These issues lead to an obvious cost/benefit analysis that firms and clients have to review to determine whether this added technology truly increases efficiency.

Customer Interactions (Chat Bots/Virtual Assistants)

Gone are the days of calling a call center staffed with hundreds of people. Generative AI now allows for the near-complete automation of customer interactions. AI platforms give companies the ability to interact directly with customers and answer their questions in real time, not only via written prompts but also through video interactions. The technology has advanced to the point where it is virtually indistinguishable from interacting with a human being.
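At its simplest, the written-prompt side of this can be sketched as matching a customer question to the closest canned answer (the FAQ entries below are hypothetical; production chatbots use generative models rather than word overlap):

```python
# Toy FAQ bot: answer a question by picking the stored question that shares
# the most words with it. A stand-in for the generative systems in the text.
faq = {
    "what are your hours": "We are open 9am-5pm, Monday through Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my order": "Track your order from the 'My Orders' page.",
}

def answer(question):
    """Return the canned answer whose stored question best overlaps the input."""
    q_words = set(question.lower().split())
    best = max(faq, key=lambda k: len(set(k.split()) & q_words))
    return faq[best]

print(answer("How do I reset my password?"))
```

The leap from this sketch to modern systems is that a generative model composes a new answer each time instead of retrieving a canned one, which is precisely what makes the interaction feel human and what creates the risks discussed next.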

In fact, the technology has advanced so far that it can mimic individuals convincingly enough to be used for cyber scams. Recently, a finance worker was tricked into transferring $25 million after a video conference call with deepfakes of several of the worker's co-workers and a purported CFO from a different branch of the company.[viii]

Use of AI in Employment Decision Making

Numerous AI products proclaim that they can replace a company's HR team by ingesting huge amounts of job applicants' submitted information. One avenue in which employers have recognized benefits is the use of AI for pre-employment decision making on job applicants. AI vendors claim to offer innovative solutions to streamline processes, enhance decision-making, and improve overall efficiency within the HR domain. One significant application of AI in HR is talent acquisition. AI-powered tools can analyze vast amounts of data to identify suitable candidates, assess resumes, and even conduct initial candidate screenings. This not only saves time for HR professionals but also promises a more objective and data-driven approach to hiring, minimizing biases and improving the quality of talent selection.

Another crucial aspect of AI in HR is employee engagement and retention. AI tools can analyze employee data, such as performance metrics, feedback, and even sentiment analysis from communication channels, to identify patterns and predict potential issues. This enables HR teams to proactively address concerns, provide personalized development opportunities, and enhance the overall employee experience. Additionally, AI-driven chatbots and virtual assistants can handle routine HR inquiries, allowing HR professionals to focus on more strategic tasks.

Furthermore, AI plays a vital role in learning and development within organizations. AI-powered systems can assess employee skills, preferences, and learning styles to deliver personalized training programs. This ensures that employees receive tailored learning experiences, fostering continuous professional development and skill enhancement. By leveraging AI for HR functions, organizations can not only streamline processes but also make more informed, data-driven decisions that contribute to the overall success and growth of the workforce.[ix]

Generating Company Intellectual Property

The most popular and widely publicized recent consumer use of AI is the generative platforms that create outputs based on a prompt entered by the user. Photographs, short stories, company logos, and a multitude of other works can be created quickly and simply by typing a prompt into the platform, and the AI will create it within a matter of seconds. But if businesses are utilizing this technology for commercial purposes, it can create significant headaches for in-house counsel down the road.

Moreover, these same AI platforms are now being sued for violating the intellectual property rights of others because of the way they train their algorithms to create content. These intellectual property issues have prompted agencies to provide guidance and could eventually spur new laws changing the current landscape (which very well could happen by the time you see this presentation).

Intellectual property at its core is rooted in the U.S. Constitution. Article I, Section 8, Clause 8 (commonly referred to as the Intellectual Property Clause) provides: “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” This section of the Constitution empowered Congress to protect inventions and works of art and eventually led to the establishment of the United States Patent and Trademark Office (USPTO) and the United States Copyright Office.[x]

Legal Risks Related to the Potential Uses of AI and How to Limit Risk

Employment Discrimination

Several states and localities have already enacted laws restricting companies from using AI to evaluate job applicants. Specifically, Illinois and New York City (with laws proposed in New York, New Jersey, and Maryland) have passed laws regulating the use of AI in pre-employment screening. The primary concern these laws attempt to address is bias that may be baked into the AI tools and algorithms used to screen potential employees, with the aim of ensuring that applicants are treated in an unbiased manner.[xi]

The problem lies in the idea that if AI systems are trained on historical data and that historical data reflects biases present in the workforce then there would inevitably be a risk that those biases would be reflected in the employment decisions proposed by the AI platform. This could lead to discriminatory outcomes based on factors such as gender, race, or other protected characteristics, raising concerns about compliance with anti-discrimination laws.

Transparency and explainability of AI algorithms pose another legal risk. Many AI models, particularly complex ones like deep learning neural networks, operate as “black boxes,” making it challenging to understand how they arrive at specific decisions. In employment matters, where decisions may have significant consequences for individuals, lack of transparency can raise concerns about fairness and accountability. Employees have the right to understand the basis of decisions affecting them, and opaque AI systems may make it hard to meet this expectation. The next step would be a discrimination claim based on an AI-driven employment decision that cannot be adequately explained, leaving the employer potentially exposed to liability.

In the EU, at the intersection of employment law and data protection law, Art. 22 GDPR imposes a fundamental ban on “profiling”: data subjects are protected from exclusively AI-based decisions (decisions on hiring, dismissals, warnings, and promotions, as well as the preparation of the documents required for them, such as acceptances or rejections of applicants, employment contracts, and notices of termination). AI may be used in HR work only to make decision recommendations; the final decision must be made by a human being, as AI is prone to errors.

Data Privacy and Cybersecurity Issues

There are significant data privacy risks that can arise from using AI. These risks vary but primarily arise from two things: what information is being shared with the AI and how the AI uses that information. To further complicate matters, these concerns vary significantly depending on where in the world your data arises and how you are using the AI platform.

In the US, there is no single, universally applicable data privacy law (yet), but certain industries have statutes in place and/or are starting to see regulatory frameworks that govern individual privacy and companies' obligations related to it. Nearly every state at this point has some sort of data privacy statute, but those vary widely: some have robust requirements and protections in place while others have very little.

The EU recently reached an agreement on the Artificial Intelligence Act (“AI Act”), which is being celebrated as being as innovative as the technology it attempts to regulate. On closer inspection, however, its shortcomings become obvious, and already existing law has to fill the void. Recognizing the potential threat to people's safety, livelihoods, citizens' rights, and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

  1. biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  2. untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  3. emotion recognition in the workplace and educational institutions;
  4. social scoring based on social behavior or personal characteristics;
  5. AI systems that manipulate human behavior to circumvent their free will;
  6. AI used to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation).

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

Obligations also do not apply to research, development, and prototyping activities preceding release on the market, and the regulation furthermore does not apply to AI systems that are exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities.

For AI systems classified as high-risk due to their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behavior are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

Copyright Issues

The proliferation of generative AI has led to rapid developments at the Copyright Office and a new line of copyright guidance. The Copyright Office has determined that AI cannot be considered an “author” for purposes of copyright registration, rooting this opinion in the text of the Constitution and the Copyright Act.

The Office's guidance begins by asking “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.”[xii]

The Copyright Office provided an example and opined that “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”[xiii]

Additionally, these emerging legal issues have given rise to numerous copyright infringement suits where authors are asserting claims against generative AI platforms for allegedly infringing on their copyrighted works.[xiv] The argument in these instances is that the AI platforms infringed the copyrighted works when the AI creator “trained” the AI platforms using the copyrighted works. Essentially, the copyright holders are claiming that the AI products use their works to make the AI platforms better, and in some instances, the AI platforms will regurgitate the copyrighted works when asked certain prompts by the users.

Fundamentally, these cases will center on what constitutes “fair use” for copyright purposes. “Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports. There are no legal rules permitting the use of a specific number of words, a certain number of musical notes, or percentage of a work. Whether a particular use qualifies as fair use depends on all the circumstances.”[xv]

Patent Issues

It is worth briefly noting that the USPTO has determined that the number of AI-related patent applications increased from less than 10,000 annually in 2005 to almost 80,000 AI-related patent applications in 2020.[xvi] This explosion in volume of AI patent applications has obviously resulted in an increasing number of granted AI patents (which the USPTO estimated to be as many as approximately 450,000 in total as of 2020[xvii]) but it has also raised interesting issues that the USPTO has had to review and address.

Similar to how the Copyright Office has treated AI-generated works for purposes of copyright registration, the USPTO has reached a similar conclusion and provided guidance that AI cannot be considered an “inventor” for purposes of patent protection. The Patent Office's conclusion is bolstered by the holding of the United States Court of Appeals for the Federal Circuit in the 2022 case of Thaler v. Vidal.[xviii] This case centered on “the question of who, or what, can be an inventor” and whether an AI program can be listed as an inventor on a patent application filed with the USPTO.[xix] The plaintiff in this action was an individual who created an AI system. That individual claimed that the AI system created two new inventions, and he filed two patent applications related to the AI-created inventions.[xx] Each of the patent applications listed the AI system as the sole inventor. The USPTO concluded that the applications lacked an inventor and requested that the plaintiff identify the valid inventor(s). The plaintiff contested the notice and then sought judicial review.[xxi]

The federal district court sided with the USPTO and entered summary judgment in its favor, finding that an inventor has to be an “individual” under the Patent Act and that the plain meaning of that term as used in the statute is a natural person.[xxii] The United States Court of Appeals for the Federal Circuit agreed with the district court and concluded that for purposes of patent protection, an inventor has to be an individual and that “Congress has determined that only a natural person can be an inventor, so AI cannot be.”[xxiii]

How to Effectively Manage the Use and Risks of AI in the Workplace

The first, and most obvious, step for any in-house counsel is to ensure that they understand how their business client is using (or wanting to use) AI in business operations.

Collaboration with Stakeholders

The most obvious, and hopefully the most utilized, way to head off issues before they occur is early and frequent collaboration with business stakeholders. This effort is an absolute necessity when considering the potential benefits and detriments of using AI in the workplace, though it poses difficulties for in-house counsel and their outside counterparts as the scale of a business increases. Finding ways to stay top-of-mind with business users, so that there is open and frank communication about which emerging technologies are being used and how, will ultimately allow in-house counsel to determine the legal guidance necessary to ensure that any legal risks that exist are appropriately addressed and mitigated.

Contract Provisions

Another way to address the potential privacy and data/cybersecurity concerns head on is by negotiating directly with the AI providers and considering their “enterprise” solutions. Admittedly, the current focus of many companies is solely on consumer subscriptions, which are not subject to contract negotiations. But several AI providers have begun to recognize the concerns businesses may face and have created off-shoot products targeted toward corporations. Specifically, ChatGPT now offers an “Enterprise” product where customer prompts and data are not stored or used for training models and which includes increased levels of data encryption and SOC 2 cybersecurity compliance.

On-Premises Solutions

Companies have begun building their own AI platforms and hosting them on premises. “On premises” means that the company hosts the AI platform on its own assets (local servers) that it owns and manages, with access limited to individuals within the company. This gives companies greater control over privacy concerns, but at a significant monetary and administrative cost. The build-out costs, in both time and assets, are significantly higher than they would be for a cloud-based platform. Additionally, once an on-premises solution is created, it still has to be maintained, incurring even more time and cost.

Adapting employment contracts

An employer can prohibit employees from using AI by virtue of its right to issue instructions, which includes determining which work equipment employees may or must use. To avoid ambiguity regarding the permissibility of using AI, the employer should establish clear rules and, where the employer wishes AI to be used, specify which AI tools may be used. Furthermore, any work done by AI should be labelled as such.

Conclusion

Ultimately, the benefits that AI offers are industry-changing, but they come with a slew of potential detriments and legal headaches for in-house counsel and management to analyze and mitigate. There are a number of ways to mitigate the risks involved. For companies using and working with AI, implementing an internal AI guideline may be advisable, covering the following points:

  1. Copyrights and licenses (it has proven helpful to explicitly name the AI providers employees may use)
  2. Data protection
  3. Handling of business secrets and confidential information
  4. Labelling obligations and transparency
  5. Training and awareness-raising
  6. Monitoring and enforcement

[i] Rockwell Anyoha, The History of Artificial Intelligence (August 28, 2017), https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

[ii] For a digestible explanation, consider: Timothy B. Lee and Sean Trott, A jargon-free explanation of how AI large language models work (July 31, 2023), https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/.

[iii] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, Whitehouse.Gov (October 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[iv] Id.

[v] Foo Yun Chee, Martin Coulter and Supantha Mukherjee, Europe agrees landmark AI regulation deal (December 11, 2023, 10:29 AM CST), https://www.reuters.com/technology/stalled-eu-ai-act-talks-set-resume-2023-12-08/

[vi] IBM Data and AI Team, Understanding the different types of artificial intelligence (October 12, 2023), https://www.ibm.com/blog/understanding-the-different-types-of-artificial-intelligence/.

[vii] Portions of this section were drafted by OpenAI’s ChatGPT platform and modified by the authors of this paper.

[viii] Heather Chen and Kathleen Magramo, Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ (February 4, 2024, 2:31 AM EST), https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.

[ix] Portions of this section were drafted by OpenAI’s ChatGPT platform and modified by the paper’s authors.

[x] Milestones in U.S. Patenting, United States Patent and Trademark Office website – USPTO.gov. https://www.uspto.gov/patents/milestones (last visited February 12, 2024).

[xi] Alonzo Martinez, Balancing Innovation And Compliance: Navigating The Legal Landscape Of AI In Employment Decisions (October 31, 2023, 6:54 EDT), https://www.forbes.com/sites/alonzomartinez/2023/10/31/balancing-innovation-and-compliance-navigating-the-legal-landscape-of-ai-in-employment-decisions/?sh=75311382da2f.

[xii] Copyright Registration Guidance: Works Containing Materials Generated by Artificial Intelligence, United States Copyright Office, p.4, https://www.copyright.gov/ai/ai_policy_guidance.pdf (quotations omitted) (last visited February 12, 2024).

[xiii] Id.

[xiv] Matt O’Brien, ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works (January 10, 2024, 3:05 PM CST), https://apnews.com/article/openai-new-york-times-chatgpt-lawsuit-grisham-nyt-69f78c404ace42c0070fdfb9dd4caeb7; Matt O’Brien, Sarah Silverman and novelists sue ChatGPT-maker OpenAI for ingesting their books (July 12, 2023, 1:56 PM CST), https://apnews.com/article/sarah-silverman-suing-chatgpt-openai-ai-8927025139a8151e26053249d1aeec20; Jocelyn Noveck and Matt O’Brien, Visual artists fight back against AI companies for repurposing their work (August 31, 2023, 1:55 PM CST), https://apnews.com/article/artists-ai-image-generators-stable-diffusion-midjourney-7ebcb6e6ddca3f165a3065c70ce85904

[xv] Can I Use Someone Else’s Work? Can Someone Else Use Mine?, United States Copyright Office, https://www.copyright.gov/help/faq/faq-fairuse.html#:~:text=Under%20the%20fair%20use%20doctrine,news%20reporting%2C%20and%20scholarly%20reports (last visited February 12, 2024).

[xvi] Artificial Intelligence (AI) Trends in U.S. Patents, United States Patent and Trademark Office website – USPTO.gov, https://www.uspto.gov/sites/default/files/documents/Artificial-Intelligence-trends-in-U.S.-patents.pdf (last visited February 12, 2024).

[xvii] Please note this number includes the total number of all granted AI U.S. patents from 1976-2020.

[xviii] Thaler v. Vidal, 43 F.4th 1207, 1209 (Fed. Cir. 2022), cert. denied, Thaler v. Vidal, 143 S. Ct. 1783, 215 L. Ed. 2d 671 (2023).

[xix] Id. at 1209.

[xx] Id. at 1210.

[xxi] Id.

[xxii] Id.

[xxiii] Id. at 1213.