Introduction
The adoption of artificial intelligence (“AI”) is on the rise, and companies are using it for many activities, including automation, compliance monitoring, contract review, and contract drafting. Despite the significant opportunities that AI tools present, their use raises ethical challenges, particularly concerning data privacy, intellectual property (IP) rights, and attorney-client privilege. In-house lawyers must ensure that innovation aligns with professional ethics.
1. Accuracy
Fabricated facts can be detrimental to an organization or a brand. AI systems learn from existing datasets, which may produce biased or inaccurate outputs. In-house lawyers are bound to maintain competence and to confirm the accuracy of every AI output. In 2023, several lawyers were sanctioned for submitting fake court opinions produced by an AI tool. In Mata v. Avianca, Inc., Mata’s lawyers, Peter LoDuca and Steven Schwartz, together with their law firm, Levidow, Levidow & Oberman (collectively, the “Respondents”), submitted fictitious judicial opinions containing fake quotes and citations generated by ChatGPT, and filed them without reviewing the authorities cited. Avianca’s lawyers alerted the court that the cited authorities did not exist; the court researched them and found none. The Respondents were then ordered to file an affidavit attaching the cited cases. At the sanctions hearing, it emerged that ChatGPT was the source of the citations. Steven Schwartz, who prepared the documents, stated that he had not believed ChatGPT could fabricate a case, assuming instead that any case he could not locate elsewhere was simply unpublished. The lawyers were ordered to send copies of the court’s ruling to their client, Mr. Mata, and to the judges falsely identified as the authors of the fake opinions, and to pay a penalty of $5,000.
As this case shows, relying wholly on AI output without verification amounts to a failure of competence. In-house lawyers should learn how AI works, including its strengths and limitations, and should review AI outputs and correct any mistakes before relying on them.
2. Data Confidentiality
AI systems rely on data that often includes sensitive and privileged information. Third parties may gain access to confidential information, creating risks of data breaches and unauthorized access. In-house lawyers must therefore ensure that any AI platform complies with strict data protection standards. Where it is unclear whether a tool complies with data protection laws, information uploaded to AI systems, particularly generative AI, should first be anonymised to maintain confidentiality. Lawyers should also inform clients and obtain their consent before uploading client data into an AI system, to ensure compliance, trust, and competence.
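The anonymisation step described above can be partly automated. The following is a minimal, illustrative sketch only: the patterns, placeholder labels, and the `redact` helper are assumptions for demonstration, not a complete anonymisation solution. Real workflows would also need named-entity redaction (people, companies, matter names) and human review before anything is uploaded.

```python
import re

# Illustrative patterns only; placeholder labels are arbitrary choices.
# This catches e-mail addresses and phone-number-like strings, nothing more.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders.

    Note: personal names (e.g. "Jane Doe" below) are NOT caught by these
    regexes; redacting them would require named-entity recognition or
    manual review.
    """
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

prompt = "Contact Jane Doe at jane.doe@example.com or +1 212 555 0100."
print(redact(prompt))  # prints: Contact Jane Doe at [EMAIL] or [PHONE].
```

Even with such tooling in place, the lawyer, not the script, remains responsible for confirming that nothing privileged leaves the organisation.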
Lawyers have an ethical duty to conduct due diligence on AI systems and to control access so as to ensure compliance with data privacy laws. This duty remains in force even where private platforms promise that information will not be shared outside the organization from which it originates.
3. Intellectual property rights
Numerous cases alleging infringement of IP rights have been filed, some against OpenAI, Microsoft, and other large technology companies. Authors have objected to the unauthorised use of their works to train AI systems.
With a growing number of cases challenging the use of protected works to train AI systems, in-house lawyers should expect the law in this area to evolve. Even fair use may prove insufficient as a defence.
4. Accountability
Integrating AI into workflows cannot replace professional judgment. Lawyers remain accountable for every decision made with AI assistance. Because unsupervised AI use may breach ethical standards, in-house lawyers must maintain human oversight to ensure that legal advice and decisions remain sound and compliant with professional conduct rules.
5. AI Policy
Given the limitations of AI, in-house lawyers should ensure that their companies adopt comprehensive policies or guidelines on the use of AI tools. This will help curtail overreliance and reduce the risk of ethical breaches.
Integrating AI requires a balance between innovation and professionalism. In-house lawyers should remain committed to maintaining competence and upholding the ethical standards upon which the legal profession is built.