Artificial Intelligence and You: Perfect Together?

Once little more than science fiction, products based on artificial intelligence (“AI”) have found their way into many aspects of our daily lives.  Self-driving, autonomous vehicles are on the roads in certain parts of the country.  Millions of Americans use and rely on Fitbits and similar products to register and track biometric data and make recommendations for fitness, nutrition and health decisions.  Robotics is finding its way into many aspects of product manufacturing and the medical field.  As with many scientific and technological advances, government regulation and legal doctrines tend to be outdated and slow to catch up.  As the use of AI continues to expand, questions arise concerning the extent to which such technology should be regulated.  Furthermore, when AI fails or causes injuries, there are unanswered questions as to whether liability exists for such injuries, who bears liability, and under what legal theories.

What is AI?

AI is broadly defined as computer systems and programs that perform tasks that normally require human decision-making and intelligence.  AI systems typically use algorithms and neural networks that, when combined with computer programs, accomplish specific tasks by recognizing and processing data.  Two forms of AI generally exist and operate in today’s world: reactive AI and limited memory AI.

Reactive AI is the original and most basic form of AI.  Reactive AI systems are closed systems programmed to provide a predictable outcome based on the input they receive, and they respond to identical data in exactly the same way every time.  Reactive AI systems are unable to learn from the data they receive over time to improve their reliability and performance.  They are very prevalent but inherently limited in functionality because they remain in the same state after they leave the developers’ hands.
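To make the distinction concrete, the following is a brief, purely illustrative sketch, written in Python with invented names and thresholds that are not drawn from any actual product, of how reactive behavior works: the rules are fixed when the system is built, so identical inputs always produce identical outputs.

    # Illustrative sketch only; the rule and threshold are hypothetical.
    def reactive_brake_decision(distance_to_obstacle_m: float) -> str:
        """Apply a fixed, pre-programmed rule to the input received."""
        if distance_to_obstacle_m < 5.0:
            return "brake"
        return "maintain speed"

    # Identical data always produces an identical response.
    assert reactive_brake_decision(3.0) == reactive_brake_decision(3.0)

Because nothing inside the function changes after it is deployed, the system behaves exactly the same way no matter how much data it has processed.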

Unlike reactive AI, limited memory AI (also referred to as machine learning) is a more sophisticated form of AI.  Limited memory AI is capable of retaining and learning from the data it receives to improve its functionality.  Limited memory AI systems build a knowledge base that they use to make predictions and perform complex classification tasks.  In the legal field, for example, lawyers use technology-assisted review to distinguish between responsive and non-responsive documents; similarly, self-driving cars receive, store and interpret data from sensors that allow the vehicle to safely navigate the roads.  Limited memory AI is the most widely used form of AI in our current world.  Because limited memory AI systems change over time, they present challenges to the legal system.
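By contrast, the following purely illustrative sketch, again in Python with invented names and values, shows the defining feature of limited memory AI: the system retains the data it receives, and its outputs change as that internal knowledge base grows.

    # Illustrative sketch only; the model and numbers are hypothetical.
    class RunningAverageEstimator:
        """Predict the next value as the average of all values seen so far."""

        def __init__(self) -> None:
            self.total = 0.0
            self.count = 0

        def observe(self, value: float) -> None:
            # Each observation becomes part of the system's knowledge base.
            self.total += value
            self.count += 1

        def predict(self) -> float:
            return self.total / self.count if self.count else 0.0

    model = RunningAverageEstimator()
    for reading in [4.0, 6.0, 8.0]:  # data received over time
        model.observe(reading)
    print(model.predict())  # 6.0 now; a different answer after more data

Unlike the reactive example above, the same system will answer differently as it accumulates data, which is precisely why the version of an AI system that allegedly causes an injury may differ from the version the developer originally released.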

Existing Regulations

There are currently no comprehensive federal statutes or regulations that govern the design and use of AI technology.  To date, the federal government has addressed AI issues on an agency-by-agency basis.  For example, in 2016, Congress passed the 21st Century Cures Act (“Cures Act”), which, among other things, regulates AI in the medical field.  The Cures Act amended the Food, Drug and Cosmetic Act (“FDCA”) to clarify the extent to which the FDA regulates medical software as a medical device.  Specifically, the Cures Act amended the FDCA to include the following software functions within the definition of a medical device:

Software functions that are “intended to acquire, process, or analyze a medical image or signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system, for the purpose of:

“supporting or providing recommendations to a healthcare professional about prevention, diagnosis, or treatment of a disease or condition;” or

“enabling such healthcare professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such healthcare professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.”

See 21 U.S.C. § 360j(o)(1)(E)(ii) and (iii).  Based on the authority provided by the Cures Act, the FDA has adopted the following definition of AI:

“A device or product that can imitate intelligent behavior or mimics human learning and reasoning. Artificial intelligence includes machine learning, neural networks, and natural language processing. Some terms used to describe artificial intelligence include: computer-aided detection/diagnosis, statistical learning, deep learning, or smart algorithms.”

See www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-criteria.

Since the Cures Act was enacted, the FDA has issued two documents setting forth its foundational approach to the pre-market review of medical AI software and medical devices containing AI.  In April 2019, the FDA issued a Discussion Paper and Request for Feedback entitled “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning [AI/ML]-Based Software as a Medical Device [SaMD].”  In January 2021, the FDA published its “Artificial Intelligence/Machine Learning [AI/ML]-Based Software as a Medical Device [SaMD] Action Plan,” based on feedback it received in response to the Discussion Paper.  Both documents expressed the FDA’s intention to foster safe and effective innovations in medical software by focusing on industry transparency and real-world performance monitoring.

Of the various federal regulatory agencies, the FDA is the most advanced in terms of regulating AI.  Other federal agencies, however, have been tasked with developing AI regulations and standards.  With respect to autonomous vehicles, the National Highway Traffic Safety Administration (NHTSA) released for public comment an advance notice of proposed rulemaking entitled “Framework for Automated Driving System Safety” in November 2020.  See 85 Fed. Reg. 78058 (Dec. 3, 2020).  In January 2021, the Department of Transportation also released its “Automated Vehicles Comprehensive Plan.”  See www.transportation.gov/av/avcp/5.  Also in 2021, Congress passed the National Defense Authorization Act, which directed the National Institute of Standards and Technology (NIST), a laboratory and agency within the Department of Commerce, to develop “a voluntary risk management framework for trustworthy AI systems.”  In July 2021, NIST issued a Request for Information soliciting input to help develop risk management guidance related to AI.  See www.nist.gov/news-events/news/2021/07/nist-requests-information-help-develop-ai-risk-management-framework.  In September 2021, the Department of Commerce also announced the formation of the National Artificial Intelligence Advisory Committee to work with the National AI Initiative Office of the White House and other government agencies to provide recommendations on various issues affecting AI.  See www.commerce.gov/news/press-releases/2021/09/department-commerce-establishes-national-artificial-intelligence.

In addition to the United States, the European Union (EU) has taken steps to regulate the use of AI.  In April 2021, the European Commission (EC) published its “Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence.”  See www.digital-strategy.ec.europa.eu/en/library/communication-fostering-european-approach-artificial-intelligence.  Unlike the U.S.’s agency-by-agency approach, the EU’s proposed AI regulation, if adopted, would govern the development, placement and use of all AI systems in the EU.  The proposal takes a risk-based approach, imposing varying levels of regulation on AI systems depending on whether they are classified as high risk, limited risk, or minimal risk.  In addition, the proposed regulation would prohibit the use of AI technology for certain applications that implicate personal privacy concerns.  The EC’s proposal was also accompanied by proposed changes to the EU’s Machinery Directive to address gaps in that directive and ensure the safe integration of AI systems into machinery.  See www.europeansources.info/record/proposal-for-a-regulation-on-machinery-products/.  These regulations have the potential to impact the sale of machines containing AI in the EU.

Potential Legal Implications

As injuries occur, whether because of alleged flaws or defects in the design or development of an AI system or because of how that system is used in real-world scenarios, questions will arise concerning whether the company that developed the AI has potential liability and, if so, under what legal theories.  One topic that is often debated is whether AI systems are governed by product liability laws.  Typically, product liability laws impose strict liability on product manufacturers but do not apply to providers of services.  To date, whether AI systems and other forms of computer software are a product or a service for purposes of product liability law remains largely an unanswered question.

Many states’ product liability laws do not define “product.”  The New Jersey Product Liability Act, for example, does not define what qualifies as a product.  When a state’s law does not define what constitutes a product, the courts may look to the Restatement for guidance.  The Restatement (Third) of Torts, § 19 defines a product as “tangible personal property distributed commercially for use or consumption.”  Section 19 states that certain intangible items, like electricity, may constitute a product “when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property.”  The commentary to Section 19, although recognizing that there is a dearth of case law on the issue, suggests that computer software should be treated as a product, primarily because courts have frequently held that mass-produced computer software is a good rather than a service for purposes of the Uniform Commercial Code.  See Restatement (Third) of Torts, § 19, cmt. d.

Very few reported cases have addressed whether computer software is a product for product liability purposes, and those that exist present a mixed bag.  Where the software is incorporated into a tangible product, like a vehicle or machine, courts are more likely to find that a plaintiff’s claims are governed by product liability laws.  In Flynn v. Am. Honda Motor Co., No. 4:11-cv-3908, 2015 U.S. Dist. LEXIS 1016 (S.D. Tex. Jan. 6, 2015), plaintiff alleged that the defendant should have used a safer algorithm that signaled when the vehicle’s air bags should deploy.  Plaintiff’s claim alleged design defect, but the court dismissed the claim because plaintiff did not present evidence demonstrating the cost of the proposed alternative algorithm.  Id. at *10-14.  In Holbrook v. Prodomax Automation Ltd., No. 1:17-cv-219, 2021 U.S. Dist. LEXIS 178325 (W.D. Mich. Sep. 20, 2021), plaintiff brought negligence claims against the designer and manufacturer of software that automated an assembly line that allegedly caused the decedent’s death.  The defendant moved to dismiss plaintiff’s negligence claim, arguing that its software was a product and that plaintiff’s claims were limited to those available under Michigan’s product liability statute.  The court agreed with the defendant and held that its software was either a product itself or a component part of the assembly line.  Id. at *14-16.

On the other side of the spectrum is Rodgers v. Christie, 795 Fed. Appx. 878 (3d Cir. 2020).  In Rodgers, plaintiff alleged that a Public Safety Assessment (“PSA”), a multifactor risk assessment model created by the defendant and utilized by courts in determining whether to grant bail to pre-trial detainees, was a defective product.  In upholding the trial court’s decision that plaintiff could not pursue a product liability claim, the court held that the PSA did not fit the Restatement’s definition of a product for two reasons.  First, the court held that the PSA was not distributed commercially.  Second, the court stated that the PSA was an algorithm or formula and, therefore, not tangible property or something analogous to it.  The court explained that the PSA was more akin to “information, guidance, ideas and recommendations,” which are not products under the Restatement because they raise First Amendment concerns.  Id. at 880.

Whether an AI system constitutes a product or a service will likely impact both the nature of the claims asserted against the manufacturer and the defenses it may have at its disposal.  If AI systems are governed by product liability laws, plaintiffs typically will assert claims for manufacturing defect, design defect or failure to warn.  In the context of AI, design defect claims are likely to predominate.  Depending on the jurisdiction and the specific product at issue, to establish a design defect claim, a plaintiff would need to rely either on the consumer expectation test to establish that the product is unreasonably dangerous, or on the risk/utility test, which typically requires proof of a safer alternative design.  In product liability cases, a product’s design is evaluated based on the condition it was in when it left the manufacturer’s hands.  Because most AI systems are designed to evolve and adapt through real-world application, however, in many cases the AI system that allegedly injured the plaintiff may be significantly different from the one that left the developer’s hands.  If AI is determined to be a service rather than a product, plaintiffs will likely be limited to asserting other negligence-based claims against the AI developer.  In negligence-based claims, the focus is typically on whether the company’s conduct was reasonable rather than on whether there were defects or flaws in the AI system.

Time will tell how these existing legal principles are applied to AI systems that allegedly cause harm.