
On October 30, 2023, President Joseph Biden signed the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” proclaiming that “[the] Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.” In a subsequent speech, Vice President Kamala Harris emphasized that there was “a moral, ethical and societal duty to… address predictable threats, such as algorithmic discrimination,” a term describing instances in which AI systems “contribute to unjustified different treatment or impacts disfavoring people based on [race, sex, age, and other protected traits].”

The Executive Order is the latest in a series of Biden Administration pronouncements and policy decisions designed to police algorithmic discrimination using the powers of the Equal Employment Opportunity Commission (“EEOC”) and Title VII of the Civil Rights Act, which prohibits discrimination against employees and prospective employees. The Executive Order directs the Attorney General to convene a meeting of federal civil rights officials by the end of January 2024 to discuss coordination and enforcement measures with the Department of Justice. Clearly, in the years to come, employers will need to insulate themselves from charges of algorithmic discrimination stemming from AI-influenced hiring decisions.

The Department of Justice is particularly sensitive to concerns about AI discrimination in the hiring process because the use of AI in recruitment has grown exponentially in recent years. Indeed, an August 2023 survey from Zippia found that 35-45% of all companies, and 99% of Fortune 500 companies, use AI to assist with employee recruitment.

The concern is that AI, trained on pattern recognition through review of historical hiring data, will develop biases against certain applicants. For example, in 2018, Amazon found that an AI program designed to assist with the recruitment of software developers showed bias against resumes which contained the word “women.” The problem arose because the AI program was trained to identify prospective candidates by reviewing resumes submitted over the preceding ten years, most of which came from male applicants. As a result, the program unintentionally discriminated against prospective female recruits and potentially exposed Amazon to civil liability. Similarly, in August 2023, the EEOC settled a first-of-its-kind lawsuit, brought in the Eastern District of New York, for $365,000 against iTutorGroup Inc., a company which provided English tutors to students in China. The lawsuit alleged that iTutorGroup utilized AI screening tools to automatically reject female job applicants over age 55 and male job applicants over age 60.

Traditional Indemnification Agreements May Not Be Sufficient To Protect Your Business From Claims of Algorithmic Discrimination

Employers must take special care when seeking indemnification for claims of algorithmic discrimination in hiring because courts have held contractual indemnification agreements unenforceable under existing Title VII case law. In Equal Employment Opportunity Commission v. Blockbuster, Inc., the United States District Court for the District of Maryland rejected Blockbuster’s attempt to bring a contractual indemnification claim against a staffing company that had provided it with temporary workers, holding that “the primary goal of Title VII to eradicate discriminatory conduct would be thwarted if Blockbuster were permitted to contract around its obligations and shift its entire responsibility for complying with Title VII.” EEOC v. Blockbuster, No. RWT 07cv2612, 2010 U.S. Dist. LEXIS 2889, at *9 (D. Md. Jan. 13, 2010). Multiple other courts have held similarly, finding that an “indemnification agreement would undermine any incentive to abide by the terms of the act.”1

Therefore, under existing Title VII case law, if an employer contracts to be indemnified against claims of algorithmic discrimination in the hiring process, the contract is most likely unenforceable.2 This raises the question: how can an employer protect itself against claims of algorithmic discrimination?

Technology Errors And Omissions Insurance May Offer An Alternative Form Of Protection

An errors and omissions (“E&O”) policy is “intended to insure a member of a designated calling against liability arising out of the mistakes inherent in the practice of that particular profession or business.” Watkins Glen Cent. Sch. Dist. v. Nat'l Union Fire Ins. Co., 286 A.D.2d 48, 51, 732 N.Y.S.2d 70, 72 (2d Dep’t 2001). An example of the language used in a Technology E&O policy can be found in the Eighth Circuit’s decision in St. Paul Fire & Marine Ins. Co. v. Compaq Comput. Corp., 539 F.3d 809, 813 (8th Cir. 2008), where the policy in question stated: “[w]e'll pay amounts any protected person is legally required to pay as damages for covered loss that . . . is caused by an… error, omission or negligent act.” Such policies usually cover the employer’s legal fees as well. Employers looking to mitigate the risk of liability from claims of algorithmic discrimination in hiring should consider obtaining Technology E&O coverage that insures against errors or bias in the AI hiring software being utilized and that provides coverage for any incidental or consequential damages.

Unlike the indemnification agreement at issue in Blockbuster, a Technology E&O policy would not need to specifically reference coverage for claims made by the EEOC. Instead, the policy should be drafted so that the coverage focuses upon claims based upon errors in the AI software. With careful drafting focused upon the performance of the software, rather than indemnification against discriminatory conduct, the parties to the insurance agreement could perhaps avoid the problems caused by the case law barring indemnification for Title VII claims of discrimination.

A number of companies have already contemplated the need for Technology E&O insurance against claims of algorithmic discrimination, though it remains to be seen whether such policies will be held enforceable. It also remains to be seen what additional measures, if any, the Department of Justice and the Attorney General will announce later this month to police the threat of algorithmic discrimination.

1 See Maness v. Vill. of Pinehurst, 522 F. Supp. 3d 166, 172 (M.D.N.C. 2021); see also Brown v. OMO Grp., Inc., No. 9:14-cv-02841-DCN, 2017 U.S. Dist. LEXIS 45048, at *11 n.4 (D.S.C. Mar. 28, 2017); Cordova v. FedEx Ground Package Sys., 104 F. Supp. 3d 1119, 1136 (D. Or. 2015); Thurmond v. Drive Auto. Indus. of Am., Inc., 974 F. Supp. 2d 900, 906-07 (D.S.C. 2013).

2 Similarly, if an employer obtains software which utilizes AI to assist in the hiring process, and the use of that software leads to claims of hiring discrimination, the same case law may operate to bar claims for indemnification against the software developer.