Firm News

Generative Artificial Intelligence and the Legal Industry: New Jersey and Florida Weigh In 

By Jacqueline Muttick & Kenneth O’Donohue

While attorneys have been navigating both the implementation and implications of artificial intelligence in the legal industry, the courts and state legislatures have been weighing the ethical hazards related to the use of this emerging technology. Two recent examples are Florida and New Jersey: Florida has now issued ethical guidelines for the use of artificial intelligence in the legal industry, while New Jersey has released preliminary guidelines.

Generative artificial intelligence (“AI”) uses neural networks trained on large datasets to generate content. This is how a program like ChatGPT takes a prompt, performs automated analysis using predictive models, and returns a text output to the user. While this can be a valuable tool when used correctly, the novelty of AI has led to unanticipated outcomes, including the generation of false information. In some extreme cases, such false information has resulted in sanctions and ethics complaints against lawyers who improperly used AI. In response, courts have established committees to offer guidance to both the courts and legal counsel.
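
For readers who want to see the mechanics, the short sketch below illustrates the prompt-in, text-out flow described above. It is a minimal illustration only, written in Python against the publicly available OpenAI client library; the model name is an assumption chosen for illustration, and the same cautions discussed in this article apply: the prompt leaves the firm's systems for a third-party service, and the output is a prediction that must be independently verified.

    # Minimal sketch of the prompt -> generated-text flow described above.
    # Assumes the OpenAI Python client library; the model name is illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    # The prompt is transmitted to an external service and may be retained by the provider.
    prompt = "Summarize the elements of common-law negligence in two sentences."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name (assumption)
        messages=[{"role": "user", "content": prompt}],
    )

    draft = response.choices[0].message.content  # predictive output, not verified authority
    print(draft)  # a lawyer must still confirm accuracy and cite-check before any use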

Florida’s guidelines (Opinion 24-1) stress the need for confidentiality, accuracy, competency, and adherence to current ethical standards. Florida does not bar the use of AI, but does require oversight to ensure that firms establish policies for the use of AI in research, drafting, and communications. All law firm personnel, including third-party vendors, should be aware of and adhere to these policies. AI results must be verified for accuracy and truthfulness, and must be consistent with attorney professional and ethical standards.

The Florida guidelines also address legal billing. Any costs associated with the use of AI should be disclosed to clients, and any increases in efficiency should be reflected in invoices. Further, lawyer advertising created by AI must adhere to current legal advertising standards. To the extent that AI is used on a lawyer’s website as a chatbot, that program should be modified to conform to ethical rules, including disclosing it is a chatbot, limiting responses to avoid giving legal advice, and including screening questions to prevent communications with website visitors already represented by counsel.

Florida’s opinion also stresses that confidentiality must be maintained when utilizing AI. Users must prevent disclosure of confidential information, particularly because AI programs may store and save user prompts, thereby retaining confidential information. Before utilizing AI, attorneys should determine the security protocols concerning user prompts, as well as how user prompt information is utilized, retained, and destroyed. Further, if confidential information must be disclosed to an AI program, client consent may be required.

New Jersey’s preliminary guidelines, while not as robust as those issued by Florida, still provide direction by stressing that AI does not change the core ethical responsibilities for lawyers.

New Jersey emphasizes that lawyers should engage with AI carefully, focusing on accuracy and truthfulness, which includes verifying all information generated by AI. Lawyers may use AI when interacting with clients and, if asked, must disclose that they are doing so. Because AI may generate false information, lawyers should confirm everything it produces. Disclosure of confidential information must be avoided, and it is incumbent upon lawyers using AI to ensure the security of confidential information. Firms and lawyers are also responsible for overseeing other attorneys and staff in the ethical use of AI.

Overall, the Florida and New Jersey guidelines underscore the ethical duties already incumbent upon all attorneys and provide direction on the considerations that must be weighed before utilizing AI in legal work. With the proliferation of AI in a variety of contexts, including those that may not be obvious at first glance, attorneys must remain attentive and conscientious in adhering to current ethical requirements, and must familiarize themselves with the emerging technology prior to utilization.

I’m Afraid I Can’t Do That, Dave: Your Indemnification Agreement May Not Protect You Against Biden’s AI Discrimination Crackdown


On October 30, 2023, President Joseph Biden signed the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” proclaiming that “[the] Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.” In a subsequent speech, Vice President Kamala Harris emphasized that there was “a moral, ethical and societal duty to… address predictable threats, such as algorithmic discrimination,” a term which describes when AI systems “contribute to unjustified different treatment or impacts disfavoring people based on [race, sex, age, and other protected traits].”

The Executive Order is the latest in a series of Biden Administration pronouncements and policy decisions designed to police algorithmic discrimination using the powers of the Equal Employment Opportunity Commission (“EEOC”) and Title VII of the Civil Rights Act, which prohibits discrimination against employees or prospective employees. The Executive Order directs the Attorney General to convene a meeting of federal civil rights officials by the end of January 2024 to discuss coordination and enforcement measures with the Department of Justice. Clearly, in the years to come, employers will need to insulate themselves from charges of algorithmic discrimination stemming from AI-influenced hiring decisions.

The Department of Justice is particularly sensitive to concerns about AI discrimination in the hiring process because the use of AI has grown exponentially in recent years. Indeed, an August 2023 survey from Zippia found that 35-45% of all companies and 99% of Fortune 500 Companies use AI to assist with employee recruitment.

The concern is that AI, trained on pattern recognition through review of historical hiring data, will develop biases against certain applicants. For example, in 2018, Amazon found that an AI program designed to assist with recruitment of software developers showed bias against resumes which contained the word “women”. The genesis of the problem was that the AI program was trained to find prospective candidates by reviewing resumes submitted by candidates over the preceding ten years, most of whom were male. As such, the AI program unintentionally discriminated against prospective female recruits and potentially exposed Amazon to civil liability. Similarly, in August 2023, the EEOC settled a first-of-its-kind lawsuit, brought in the Eastern District of New York against iTutorGroup Inc., a company which provided English tutors to students in China, for $365,000. The lawsuit alleged that iTutorGroup utilized AI screening tools to automatically reject female job applicants over age 55 and male job applicants over age 60.

Traditional Indemnification Agreements May Not Be Sufficient To Protect Your Business From Claims of Algorithmic Discrimination

Employers must take special care when seeking indemnification for claims of algorithmic discrimination in hiring because contractual indemnification agreements are barred under existing Title VII case law. In the matter of the Equal Employment Opportunity Commission v. Blockbuster, Inc., the United States District Court for the District of Maryland rejected Blockbuster’s attempt to bring a contractual indemnification claim against a staffing company which had provided it with temporary workers, holding that “the primary goal of Title VII to eradicate discriminatory conduct would be thwarted if Blockbuster were permitted to contract around its obligations and shift its entire responsibility for complying with Title VII.” EEOC v. Blockbuster, No. RWT 07cv2612, 2010 U.S. Dist. LEXIS 2889, at *9 (D. Md. Jan. 13, 2010). Multiple other cases have held similarly, finding that an “indemnification agreement would undermine any incentive to abide by the terms of the act.”1

Therefore, under existing Title VII case law, if an employer contracts to be indemnified against claims of algorithmic discrimination in the hiring process, the contract is most likely unenforceable.2 This raises the question: how can an employer protect itself against claims of algorithmic discrimination?

The Insurance Industry Has an Opportunity to Offer Technology Errors and Omissions Coverage to Protect Employers From the New Department of Justice Enforcement Measures

An errors and omissions policy (“E&O”) is “intended to insure a member of a designated calling against liability arising out of the mistakes inherent in the practice of that particular profession or business.” Watkins Glen Cent. Sch. Dist. v. Nat'l Union Fire Ins. Co., 286 A.D.2d 48, 51, 732 N.Y.S.2d 70, 72 (2d Dep’t 2001). An example of the language used in a Technology E&O policy can be found in the Eighth Circuit’s decision in St. Paul Fire & Marine Ins. Co. v. Compaq Comput. Corp., 539 F.3d 809, 813 (8th Cir. 2008), where the policy in question stated: “[w]e'll pay amounts any protected person is legally required to pay as damages for covered loss that . . . is caused by an . . . error, omission or negligent act.” Such policies usually cover the employer’s legal fees as well. Employers looking to mitigate the risk of liability from claims of algorithmic discrimination in hiring should consider obtaining Technology E&O coverage that insures against bias in the AI hiring software being utilized and that provides coverage for any incidental or consequential damages.

Unlike the indemnification provision at issue in Blockbuster, a Technology E&O policy would not need to specifically reference coverage for claims made by the EEOC. Instead, the policy should be drafted so that the coverage focuses on claims based upon errors in the AI software. With careful drafting focused on the performance of the software, rather than indemnification against discriminatory conduct, the parties to the insurance agreement could perhaps avoid the problems caused by the case law barring indemnification for Title VII claims of discrimination.

A number of companies have already contemplated the need for Technology E&O insurance against claims of algorithmic discrimination. It remains to be seen whether these policies will be held enforceable, and what additional measures, if any, the Department of Justice and the Attorney General will announce later this month to police the threat of algorithmic discrimination.

1 See Maness v. Vill. of Pinehurst, 522 F. Supp. 3d 166, 172 (M.D.N.C. 2021); see also Brown v. OMO Grp., Inc., No. 9:14-cv-02841-DCN, 2017 U.S. Dist. LEXIS 45048, at *11 n.4 (D.S.C. Mar. 28, 2017); Cordova v. FedEx Ground Package Sys., 104 F. Supp. 3d 1119, 1136 (D. Or. 2015); Thurmond v. Drive Auto. Indus. of Am., Inc., 974 F. Supp. 2d 900, 906-07 (D.S.C. 2013).

2 Similarly, if an employer obtains software which utilizes AI to assist in the hiring process and if the use of that software leads to claims of hiring discrimination, the same case law may operate to bar claims for indemnification against the software developer.

Maria Miller Victory for Contractor Client in Brooklyn

Kudos and a warm round of cheers to gartner + bloom PC trial attorney Maria Miller, Esq., on her immense victory earlier this week in Brooklyn.

Maria’s contractor client was facing the possibility of full liability for a ceiling collapse in a case where the injured plaintiff’s demand at trial was $5 million. Prior to the trial commencing, a settlement offer of $100,000 was extended but rejected.

Maria picked the jury, argued the in limine motions, gave the opening statement to the jury, and cross-examined three witnesses, including the plaintiff. Then, on the eve of jury deliberations and a verdict, the third-party plaintiff gave in and accepted what had been offered before trial: $100,000, roughly 2% of the claim.

This is an extraordinary result given that our client was facing an uninsured judgment in seven figures and the opposition accepted the carrier’s pre-trial offer.

This incredible resolution eliminated the risk of a significant seven-figure money judgment against our contractor client, and it was made possible only by an innovative, aggressive trial strategy that illuminated the weaknesses in the adversary’s case at every step of the litigation process. Brava to Maria Miller, Esq., on this win and many more to come!

Construction Claim with Damages in Excess of $10M Defeated for a Fraction of the Demand

In a collaborative team effort, with an ingenious defense strategy laid out by Managing Partner Ken Bloom, a construction claim with damages in excess of $10M was defeated for a mere fraction of the demand!

In 2019, a sprinkler pipe burst in an iconic commercial tower in NYC’s Financial District. The plaintiff brought suit claiming that the faulty installation of a specific coupler within the fire suppression system caused a pipe separation and ensuing flood, which in turn caused millions of dollars in damage, particularly to the super-high-speed elevators.

The case was litigated in the Commercial Division of the Supreme Court, New York County, by a well-known, sophisticated, and aggressive commercial law firm.

Managing Partner Ken Bloom developed a defense strategy that involved recruiting experts in the fields of fire suppression, metallurgy, and elevators, and shifting responsibility to other parties on various products liability theories, as well as theories involving fluid dynamics.

Based on the analysis and findings from working with our fire suppression, metallurgy and elevator experts, Partner Todd Shaw was able to develop a winning strategic and tactical litigation game plan.

Partner Roy Michael Anderson led upwards of 20 fact witness and expert depositions and was able to put every adverse witness on the defensive, eviscerating opposing fact and expert witnesses with his knowledge of highly technical issues and his extraordinarily detailed questioning.

Senior Associate Michael Hemway spearheaded and oversaw a robust discovery campaign while managing hundreds of thousands of documents with our e-discovery partners.

This victory was due not only to the strength of the players on the team but also to the innovative strategies and technological advancements deployed from the outset of the case and throughout, securing an incredibly powerful win for our client.

Most importantly, and all along the journey, the team worked very closely with our clients to glean valuable data insights at every step of the litigation process.

A standing ovation and grand kudos to Managing Partner Ken Bloom, Partners Todd Shaw and Roy Michael Anderson, and Senior Associate Michael Hemway, as well as all our support team members who contributed to this monumental victory, which has already blazed trails for similar cases to come and will continue to do so!

New Change to Rules of Evidence: Any Effect?

There is a new Federal evidence rule change taking effect in two months, and tort practitioners will need to take notice now.

As of December 1, 2023, Federal Rule of Evidence 702 (“FRE 702”) – which governs whether you will be permitted to have your expert witness testify – will change. Here is the text of the amended rule:

“A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent has demonstrated by a preponderance of the evidence that:
(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b) the testimony is based on sufficient facts or data;
(c) the testimony is the product of reliable principles and methods; and
(d) the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.”

If the above four elements of expert witness admissibility are not proved by a preponderance of evidence, then the proposed expert witness will be precluded.

At first blush these modifications seem innocuous, and experienced attorneys will wonder how the amendments change what has presumably been the expert admissibility standard in place since the 1993 Daubert case. The question is well-grounded.

The amendments to FRE 702 do not change the expert witness admissibility standard, so much as they instruct the trial Courts to utilize the proper standard. For example, the new text makes clear that, (a) it is the proponent of the expert witness (whether plaintiff or defendant) who has the burden of proving the four elements of expert admissibility, (b) it is the proponent of the expert witness who must prove all four elements by a preponderance of evidence, and (c) there is no ‘presumption’ for expert admissibility, or indeed for any of the four elements, to be given by the Courts. While these clarifications have always been the standard, Courts have been inconsistent in their application.

We see three major effects for tort practitioners who find themselves in Federal court, or in a jurisdiction that follows the Federal standard for expert witness admissibility.

First, both sides of the Bar will need to muster evidence – data, studies, surveys, peer-reviewed literature, textbooks – to prove all four elements under FRE 702. Given the not-so-subtle instruction to the trial Courts to look more closely at expert witness admissibility, practitioners will now have to pay much more attention to this than they have previously.

Second, the amendments to FRE 702 are an invitation to the Bar to make more preclusion motions (often termed Daubert motions). Closer judicial scrutiny of expert witness admissibility makes it obvious that both sides will take advantage of such motions, and, given that the initial burden of proof at trial generally falls on plaintiffs, the FRE 702 amendments are more likely to favor defendants.

Finally, there is now a whole host of case law applying a standard inconsistent with the new FRE 702. Motions and briefs must be drafted carefully to weed out that no-longer-relevant case law. By way of example, only one week ago the District Court for the Eastern District of New York decided a preclusion motion directed at several potential expert witnesses in an e-cigarette trademark infringement case. Fantasia Distrib., Inc. v. Cool Clouds Distrib., Inc., 2023 U.S. Dist. LEXIS 167641 (E.D.N.Y. 2023). While the Fantasia Court’s determinations under the old FRE 702 – mostly precluding the experts – may have been the same under the new FRE 702, the Court’s recitation of applicable case law and standards would not be the same. Here are some of the Fantasia Court’s recitations that the Bar should no longer see as of December 1:
"Qualification as an expert is viewed liberally …."
Fantasia Distrib., Inc. v. Cool Clouds Distrib., Inc., 2023 U.S. Dist. LEXIS 167641, *11

“The Second Circuit has cautioned, however, that courts ‘should only exclude [expert] evidence if the flaw [in the expert's reasoning or methodology] is large enough that the expert lacks good grounds for his or her conclusions.’"
Fantasia Distrib., Inc. v. Cool Clouds Distrib., Inc., 2023 U.S. Dist. LEXIS 167641, *13

“Although ‘the district court may . . . exclude opinion evidence where the court concludes that there is simply too great an analytical gap between the data and the opinion proffered . . . gaps or inconsistencies in the reasoning leading to the expert's opinion [generally] go to the weight of the evidence, not to its admissibility.’"
Fantasia Distrib., Inc. v. Cool Clouds Distrib., Inc., 2023 U.S. Dist. LEXIS 167641, *14

“In general, questions as to the usefulness of the expert testimony ‘should [] be resolved in favor of admissibility unless there are strong factors . . . favoring exclusion[.]’"
Fantasia Distrib., Inc. v. Cool Clouds Distrib., Inc., 2023 U.S. Dist. LEXIS 167641, *15