Updates

Artificial Intelligence Law and Policy Roundup

This article provides an overview of the current AI law and policy landscape in the U.S. by illustrating how government entities are working to foster innovation while also implementing safeguards to mitigate potential harms caused by unrestricted use of AI technologies.

As artificial intelligence (“AI”) has moved from the realm of science fiction to the world of business, companies and regulators alike have grappled with the opportunities and risks that AI presents. Today, AI-powered applications, from chatbots to delivery drones, have changed the way businesses operate and interact with their customers. At the same time, the increasing use of AI technologies to facilitate business decisions and outcomes in areas such as employment, health care, housing and finance has prompted lawmakers and regulators at both the state and federal levels to evaluate whether and how to regulate AI to ensure that it is employed responsibly and in a manner that serves the public interest. Although AI technologies have been in existence for decades, regulations are still catching up to this reality.

What is AI?

To understand the regulatory environment surrounding AI, it is important to recognize what AI is and how human input and machine operations play intertwined roles in its development and performance. AI is commonly understood as technology that can simulate intelligent human learning and decision making. AI applications employ algorithmic models that receive and process large amounts of data and are trained to recognize patterns, enabling the applications to automate repetitive functions as well as make judgments and predictions. The selection of training data, along with other training decisions, is human controlled. However, as AI becomes more sophisticated, the computer itself becomes capable of processing and evaluating data beyond its programmed algorithms through contextualized inference, creating a “black box” effect in which programmers may not have visibility into the rationale behind AI output or the data components that contributed to that output.
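The training-then-prediction loop described above can be illustrated with a deliberately tiny, hypothetical sketch: a human supplies labeled training data, and the program learns a decision rule (here, a single numeric threshold) from patterns in that data rather than having the rule hand-coded. All names and numbers below are invented for illustration.

```python
# Minimal sketch of supervised pattern recognition, as described above.
# All data and variable names are hypothetical.

# Human-controlled step: selecting and labeling the training data.
# Each example: (a numeric feature, a labeled outcome of 0 or 1).
training_data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

def train_threshold(data):
    """Learn the threshold that best separates the two labels."""
    best_threshold, best_correct = None, -1
    for t in sorted(x for x, _ in data):
        correct = sum((x >= t) == bool(y) for x, y in data)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

def predict(threshold, x):
    """Apply the learned rule to a new, unseen input."""
    return 1 if x >= threshold else 0

threshold = train_threshold(training_data)
print(predict(threshold, 8))  # decision comes from the learned pattern, not a hand-coded rule
```

Real AI systems learn far more complex rules over far more data, but the division of labor is the same: humans choose and label the data, and the model derives the decision rule, which is why the choice of training data shapes the system's behavior.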

AI across industries

While AI may seem like an abstract technical concept, AI systems pervade our daily lives. Every time we engage with a chatbot, use autocorrect functions in messaging systems, interact with virtual assistant technologies such as Amazon’s Alexa, rely on smart car features such as parking assist, or depend on logical inferencing in tax preparation programs, we are enjoying the benefits of AI.

Similarly, AI technologies play critical roles across many industries. For example, businesses use AI to streamline employment application screenings and replace workers in certain capacities. In the real estate sector, AI is being used to determine the loan amount offered to a particular borrower, the interest rate and even whether to lend to a prospective borrower at all based on an assessment of risk factors. AI is also being implemented in critical roles in the health care sector, with AI applications assisting in the diagnostic process, screening patients and predicting risks for certain diseases and health outcomes.

While these and other innovative uses of AI have driven advancements in efficiency, predictability and cost control, if not employed thoughtfully they may leave companies vulnerable to claims of bias, discrimination or other unlawful conduct. The relative risk involved with AI applications varies across industries and depends in large part on the role AI plays in a business’s interaction with its customers and decision making. As described below, many of the laws and regulations targeted at AI involve concerns over AI’s role in unlawful discrimination, biased decision making and use of information in a manner that violates privacy and data protection laws.

AI regulation on the state level

At this time, the U.S. federal government has not developed a comprehensive or coherent strategy for regulating AI. States have attempted to fill the void by developing their own regulatory regimes to address what they see as the greatest potential risks presented by the use of AI. To date, Illinois, Maryland, New York and California have shown themselves to be the most active in this regard. Illinois’ passage of the Biometric Information Privacy Act (BIPA) in 2008 represented one of the first efforts to regulate AI.1 BIPA requires an entity to provide clear and adequate notice and to obtain consent before collecting the biometric identifiers of a consumer for any use, including AI. BIPA has garnered headlines over the past few years by generating hundreds of lawsuits, not only because of its stringent notice and consent requirements but also because BIPA includes a private right of action and liquidated damages for individuals harmed by BIPA violators, making it a favorable vehicle for class action suits. In addition to BIPA, Illinois enacted the Artificial Intelligence Video Interview Act (AIVIA) in 2019.2 The law requires employers using AI technology as part of the screening or hiring process to notify applicants (i) that AI may be used to analyze their interview, (ii) how AI technology works and (iii) what characteristics AI uses to evaluate applicants, as well as obtain each applicant’s consent to be evaluated by AI.

The State of Maryland enacted a law addressing concerns similar to those behind AIVIA by requiring employers to obtain an applicant’s consent to use facial recognition technology in interviews.3 Unlike AIVIA’s notice and explanation requirements, however, Maryland’s law simply prohibits the use of facial recognition technology during job interviews unless the applicant consents to its use.

In 2021, New York City legislators passed a law that regulates the use of automated employment decision tools by employers, requires employers that use AI to audit their AI systems, and penalizes employers that engage in biased conduct arising from the use of AI in the hiring process.4
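Bias audits of the kind contemplated by laws like New York City’s typically compare how often an automated tool selects candidates from different demographic groups. A minimal, hypothetical sketch of one widely used metric, the impact ratio (related to the EEOC’s four-fifths guideline), might look like the following; the group names and numbers are invented for illustration.

```python
# Hypothetical sketch of a selection-rate impact ratio, a common metric
# in bias audits of automated employment decision tools.
# All group names and figures below are invented.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group that the tool selected."""
    return selected / applicants

# Hypothetical outcomes of an AI screening tool: (selected, total applicants).
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}

rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
reference = max(rates.values())  # rate of the most-selected group

# Impact ratio: each group's rate relative to the most-selected group.
ratios = {g: r / reference for g, r in rates.items()}
print(ratios)  # group_b's ratio of 0.6 falls below the four-fifths (0.8) benchmark
```

An auditor would flag a ratio well below 0.8 as a potential indicator of disparate impact warranting further review, though the legal analysis does not turn on any single number.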

California, like Illinois and Maryland, also has its eye on restricting the use of facial recognition tools and automated systems. In March 2022, California’s Civil Rights Council (CRC) (formerly the Fair Employment and Housing Council) published draft modifications to its antidiscrimination regulations, which would hold employers liable for the use of AI in their employment decisions where such use has a discriminatory impact.5 The CRC’s 2023 budget indicates that it will continue to advance this AI initiative. Further, in August 2022, California’s attorney general requested information from hospitals in the state on how health care facilities and other providers are identifying and addressing racial and ethnic disparities in health care algorithms.6

Other states have followed suit by rolling out their own AI regulations. Starting this year, consumer privacy laws in Colorado,7 Connecticut8 and Virginia9 will provide their residents with a right to opt out of AI profiling activity related to decision making. The Virginia and Colorado laws will also require businesses to offer opt outs for certain other processing of consumers’ personal data. As AI continues to proliferate across various sectors, more states can be expected to develop their own regulatory regimes for AI technology.

AI regulation on the national level

As noted above, Congress has yet to pass legislation regarding the privacy issues and other potential concerns raised by the broad implementation of AI technologies; however, some federal agencies have begun to publish their own guidelines. In May 2022, the Equal Employment Opportunity Commission (EEOC), which focuses on the rights of employees and job applicants, issued guidance outlining how the Americans with Disabilities Act (ADA) may apply to an employer’s use of AI.10 The EEOC stated that an employer’s use of AI can violate the ADA where (i) the employer does not provide “reasonable accommodation” as required by the ADA, (ii) the employer intentionally or unintentionally screens out an individual with a disability who is qualified for a position, or (iii) the employer’s AI technology violates the ADA’s restriction on disability-related inquiries. The EEOC also noted that even in cases where an employer is using a third party’s AI, the employer itself could still be held liable for AI technologies violating the ADA.

In addition to the EEOC’s guidance on the use of AI in the workplace, the Federal Trade Commission issued its own set of guidelines on the use of AI in 2021. The FTC, which derives its enforcement authority against businesses that engage in unfair or deceptive acts or practices from Section 5 of the FTC Act,11 has focused on AI from the consumer protection perspective. In that regard, it has advised companies to take a “responsible AI by design” approach, which involves (i) considering ways to improve the data sets used to train the AI as well as to anticipate and solve any shortcomings, (ii) watching for discriminatory outcomes, (iii) embracing transparency and independence by conducting and publishing the results of independent audits, (iv) being open and realistic about what the AI can and cannot do, (v) telling the truth about how data is being used, (vi) doing more good than harm, and (vii) holding themselves accountable. The auditing functions recommended by the FTC play an especially important role in bringing accountability to deep learning or black box AI applications, where developers may not be able to ascertain the basis for the application’s decisions or outcomes solely by analyzing the data inputs.

The White House Office of Science and Technology Policy recently published the “Blueprint for an AI Bill of Rights.”12 The Bill of Rights focuses on the development of safe and effective systems, algorithmic discrimination protections, data privacy, the need for notice and explanation, and the use of alternatives or opt-out rights. Specifically, the proposal notes that the public should (i) be protected from unsafe or ineffective systems through the use of extensive testing and risk identification processes, (ii) not face discrimination by algorithms with systems used and designed in an equitable manner, (iii) be protected from abusive practices through the implementation of built-in protections and have the ability to exert control over the use of data, (iv) know that an automated system is being used and understand how and why it contributes to outcomes that impact them, and (v) be able to opt out of automated systems and have a human alternative readily available to assist them.

Key themes to keep in mind

On both the state and federal levels, regulation and guidance focus on the same four pillars: fairness, explainability, transparency and accountability. The fairness principle is codified in the call for AI systems that are free of bias, whether intentional or unintentional. Explainability is reflected in requirements that companies be able to explain how and why their AI technologies reach particular decisions. As for transparency, government entities call for companies to be open about how their AI technology works and when automated systems are being used. Finally, the accountability principle is reflected in the call for companies to continuously inspect and interrogate their AI technologies so they can identify and address any shortcomings.

What’s to come?

Over the coming year, we expect to see enhanced enforcement of current laws as well as the promulgation of new laws and regulations relating to AI. Complementing the regulatory process, we anticipate the publication of version 1.0 of an AI risk management framework and institutional guidelines by the National Institute of Standards and Technology (NIST), which focuses on the development of industrywide technology standards.13 The objective of the framework is to better manage the risks that AI presents to individuals, organizations and society as a whole. Voluntary compliance with NIST guidelines has been used to show good-faith efforts to comply with generally accepted industry best practices, which, in turn, can help mitigate liability in other contexts. Finally, while we have not addressed international developments concerning AI regulations in this article, we expect the continued development of European Union (EU) AI regulations to inform the views of U.S. regulators as they continue to assess AI and develop a more comprehensive regulatory structure to advance the objectives of fairness, explainability, transparency and accountability.


1. Il. St. Ch. 740 § 14.
2. Il. St. Ch. 820 § 42.
3. Md. Code, Labor and Employment § 3-717.
4. New York City, Local Law No. 144, Int. No. 1894-A (2021).
5. Cal. Code of Reg. Tit. 2, Div. 4.1, Ch. 5, Subch. 2 (proposed).
6. Press Release, Office of the Attorney General of California, “Attorney General Bonta Launches Inquiry into Racial and Ethnic Bias in Healthcare Algorithms” (Aug. 31, 2022), https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-inquiry-racial-and-ethnic-bias-healthcare.
7. Colorado Attorney General, “Colorado Privacy Act (CPA) Rulemaking,” https://coag.gov/resources/colorado-privacy-act/.
8. Conn. Substitute Senate Bill No. 6, Pub. Act No. 22-15.
9. Va. Code § 59.1-573(A)(5) (2021).
10. U.S. Equal Employment Opportunity Commission, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.
11. 15 U.S.C. § 45.
12. White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
13. NIST, AI Risk Management Framework Playbook, https://pages.nist.gov/AIRMF/.