
Two Critical Ethical Issues to Keep In Mind As We Prepare for the Advent of Artificial Intelligence in the Practice of Law

By Wendy Wen Yun Chang

Artificial intelligence (AI) has been around conceptually for some time. However, recent significant advances by AI companies in the legal services space have propelled the conversation about AI technology to the forefront of legal and mainstream media. Today, articles abound discussing the potential impact of AI on the profession, with some musing whether AI is the harbinger of the demise of the need for human lawyers. Lawyers debate the role regulators should take in response to these developments. Tech leaders and community advocates ponder the potential for AI technology to close an ever-widening access-to-justice gap.

While the chatter may feel remote, practitioners should know two things. First, AI is coming, and some argue it is already here in early manifestations. Second, it is critical that practitioners understand their ethical obligations relating to AI.

What is AI?

There is wide variance in the definition of AI, but a baseline definition is "the ability of a machine to perform what normally can be done by the human mind. . . [using] automated computer-based means to process and analyze large amounts of data and reach rational conclusion—the same way the human mind does."[1] While AI aims to replicate the human mind, it is projected to eventually outpace it.[2] Unquestionably, this is a very large task, and AI technology does not yet think like a human. Defined in this manner, AI has not yet arrived, but it is getting closer and closer. Today's highly advanced legal technology can manage and analyze large amounts of data in response to inquiries in legal research, due diligence, and document review; it can compare contract language and legal briefs; and it can predict outcomes. Today's technology can even learn, using a user's interactions to refine its process and return a more refined response, and it can repeat that cycle, producing more accurate responses over time. Some already call today's advanced technology AI. No matter which definition one subscribes to, two critical ethical obligations are triggered.

Duty of Competence

Even when technology is new, an attorney's core ethical obligation to be competent in the legal services to be provided remains constant.[3] Rule 3-110(A) of the California Rules of Professional Conduct states that "[a] member shall not intentionally, recklessly, or repeatedly fail to perform legal services with competence."[4] Competence in legal services means applying the diligence, learning and skill, and mental, emotional, and physical ability reasonably necessary for the performance of such services.[5] A lawyer can cure a lack of competence by acquiring it or by associating or consulting with someone with the necessary skill.[6] That association or consultation can be with another lawyer, a non-lawyer expert, or even a client.[7] A lawyer may also avoid a competency violation by declining the representation.

All of this may look familiar, and it should. This is the duty of technology competence, and practitioners are already subject to it as a result of one massive transformation of the practice of law that has occurred: the shift from a profession based in hard-copy books, notes on yellow pads, typewriters, paper letters, landline phones, and facsimiles to one based in computers, mobile hardware (laptops, tablets, phones, etc.), e-mails, the Internet, and digital data. The law is clear that no matter how lawyers choose to provide legal services, the core ethical duties underlying those services remain the same.[8] What changes with different methods of providing legal services is what an attorney is required to do to meet those underlying core ethical standards.[9]

Technology, of course, often works behind the scenes, invisible to the naked eye. By design, AI technology seeks to make the process of getting from user input to a reasoned deliverable smoother and more accurate, with fewer intervening steps required of the user to reach the conclusion. Ironically, it is this promised gain in efficiency and precision, combined with the invisible pathway to the result, that creates the real risk of a failure of technology competence in AI use. Humans often overestimate what machines can do and then compound that error by inflating their expectations of what a computer should be able to do.[10] It is easy to type a command into a computer. When faced with what looks to be a good (and purportedly better) answer that appeared seamlessly and quickly, it would be even easier for a lawyer to over-rely on the technology and assume the answer is correct, blindly trusting that the program returned exactly what it promised. From where the attorney sits, it all looks correct. But before simply accepting the response as right, will the attorney know what he or she does not know? Will the attorney recognize if he or she asked the wrong question in the beginning? Will he or she know if all the necessary information was entered? If the attorney did not use the software correctly, will he or she be aware of that fact?[11]

Blind reliance, of course, is exactly what technology competence says an attorney must not do.[12] An attorney using technology should know enough about it to understand what he or she does not know, and should take steps to close any gaps so that the use of the technology complies with the ethical duty of competence (whether through the attorney's own knowledge, further education, or association or consultation with someone knowledgeable).[13] The attorney must review and test results received from AI technology, search for red flags or anomalies and, if necessary, modify the inquiries, provide different data, or make other adjustments to be sure that the results, which will be used to provide legal advice or services, are ethically compliant and independently his or hers.[14] These requirements will not change even if the computer purports to perform these types of services better than a human.

A subset of an attorney's duty of competence is the duty to supervise the work of subordinate attorneys and non-attorney employees or agents.[15] In addition to the duties above, attorneys must take reasonable steps to train all attorneys and staff and to make sure their use of technology is also ethically compliant. This is not an easy task, especially because comfort levels with technology vary widely and training is non-billable and time intensive. Attorneys should consider what steps to take for recalcitrant individuals. Finally, because both technology and people evolve, attorneys should periodically monitor and reassess the firm's ethical compliance across the board.

Duty of Confidentiality

The second core duty of an attorney is that of confidentiality. An attorney must "maintain inviolate the confidence, and at every peril to himself or herself to preserve the secrets, of his or her client."[16] "Secrets" include "information, other than that protected by the attorney-client privilege, that the client has requested be held inviolate or the disclosure of which would be embarrassing or would be likely to be detrimental to the client."[17] With limited exceptions, a client's confidential information may not be revealed absent a client's informed written consent.[18]

Attorneys using AI technology should exercise due diligence when using specific technology, including assessing and then implementing appropriate security measures to protect a client's confidential information.[19] Attorneys must also exercise due diligence in selecting the vendor of any AI technology used,[20] and pay attention to any terms in a vendor contract that may threaten client confidentiality, such as vendor assertions of ownership or possession of client data, what happens to client data upon breach or after termination of the relationship with the vendor, and the factors constituting termination and/or breach.[21]

AI technology, in its fullest manifestations, will continue to trigger significant conversations about its risks and benefits. In the legal technology space, AI technology holds the potential to assist lawyers in providing legal services faster and better. Lawyers should keep their ethical obligations at the forefront as they embark on using the technology to enhance their law practices.

Wendy Wen Yun Chang is a partner in the Los Angeles office of Hinshaw & Culbertson LLP. She is a member of the American Bar Association's Standing Committee on Ethics and Professional Responsibility and the Los Angeles County Bar Association's Professional Responsibility and Ethics Committee. She served as an advisor to the State Bar of California's Commission for the Revision of the Rules of Professional Conduct and as a past chair of the State Bar of California's Standing Committee on Professional Responsibility and Conduct. She is a Certified Specialist in Legal Malpractice Law by the State Bar of California's Board of Legal Specialization. She can be reached at wchang@hinshawlaw.com and found on Twitter at @wendychang888. The views expressed herein are her own.



[1] Wendy Wen Yun Chang, What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?, 33 Law. Man. Prof. Conduct 284 (Bloomberg BNA May 2017).

[2] Id.

[3] Cal. State Bar Formal Opn. No. 2015-193.

[4] Unless otherwise indicated, all references to "CRPC" shall reference the California Rules of Professional Conduct.

[5] CRPC Rule 3-110(B).

[6] CRPC Rule 3-110(C).

[7] Cal. State Bar Formal Opn. No. 2010-179, 2015-193.

[8] See, e.g., Cal. State Bar Formal Opns. No. 2010-179, 2015-193; see also Wendy Wen Yun Chang, What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?, 33 Law. Man. Prof. Conduct 284 (Bloomberg BNA May 2017).

[9] Id.

[10] Ed Walters, Sorting Through the Hype: A Practical Look at the Impact of Artificial Intelligence on the Legal Profession, Legal Malpractice & Risk Management Conference, Mar. 3, 2017, Chicago, IL.

[11] See Cal. State Bar Formal Opn. No. 2015-193 for a good analysis on a hypothetical of an attorney's overreliance on technology, and the ethical implications of it.

[12] See, e.g., Cal. State Bar Formal Opn. No. 2015-193.

[13] Cal. State Bar Formal Opn. No. 2015-193; Wendy Wen Yun Chang, What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?, 33 Law. Man. Prof. Conduct 284 (Bloomberg BNA May 2017).

[14] Id.; see also, e.g., Model Rule 2.1 (CRPC Rule 1-100(A) (ethics opinions and rules and standards promulgated by other jurisdictions and bar associations may be considered)).

[15] CRPC Rule 3-110, discussion.

[16] Cal. Bus. & Prof. Code § 6068(e)(1); CRPC Rule 3-100.

[17] Cal. State Bar Formal Opn. No. 1988-96.

[18] CRPC Rule 3-100(A); Cal. State Bar Formal Opn. No. 2010-179.

[19] Cal. State Bar Formal Opn. No. 2010-179; see also ABA Formal Opn. No. 477R.

[20] Cal. State Bar Formal Opn. No. 2012-184; see also ABA Formal Opn. No. 477R.

[21] Cal. State Bar Formal Opns. No. 2010-179, 2012-184; see also ABA Formal Opn. No. 477R; Wendy Wen Yun Chang, What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?, 33 Law. Man. Prof. Conduct 284 (Bloomberg BNA May 2017).