Artificial intelligence, legal profession, irrefutable function of reason

By Natalia Bialkowska

Law360 Canada (May 17, 2024, 11:07 AM EDT) --
To paraphrase Einstein, mankind invented artificial intelligence, but no mouse would ever construct a mousetrap for its own intellect. Admittedly limited in the use of its own potential, the human brain offers a unique function of reason, something accepted in the ever-evolving world of philosophy since the first draft of Aristotle’s Nicomachean Ethics. Even if we approach “reason” through a Nietzschean lens, relatively critical of Aristotelian ethics, reason and reason alone provides a faculty of intervention and modification, particularly over “the passions,” or, in everyday terms, the mistakes that we as lawyers can make and the pitfalls that we as humans fall into, often simply through inattentiveness.

This unique function of reason is also something artificial intelligence does not share and will never attain. Exposing governments and businesses to liability, AI systems remain nothing more than automated assumption machines, reliant on pre-programmed conceptions of personal traits and prone to replicating and amplifying discriminatory tendencies in the logic of our arguments and their conclusions. The proof is in the pudding: when prompted to “write an article about disadvantages of the AI,” the infamous ChatGPT types up a seven-point, non-exhaustive list of its own shortcomings, starting with unemployment concerns and bias and fairness issues, adding ethical dilemmas and excessive dependence, and culminating in economic inequality and increased social isolation.

In response, and joining other governments, including the European Union with its Artificial Intelligence Act and the United States with its Stop Discrimination by Algorithms Act, Canada announced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems in September 2023. The code, as its name reveals, is not mandatory, but it provides standards on how to develop and use AI systems in a legal and ethical manner. Canada continues to work on the Artificial Intelligence and Data Act, under which businesses, including active legal professionals, will be held liable for the AI activities under their control. This supervision will come in addition to the provincial Rules of Professional Conduct already in place, which bind lawyers and paralegals to competence, zealous advocacy and confidentiality.

While AI might appear to be a saviour of time, money and energy, it remains deprived of human analysis, adaptability to human reality and the experience of human connectivity. It certainly might help law students and younger lawyers or paralegals, in particular, expand their legal repertoire more quickly, but it might also make them appear incompetent when an unverified, nonexistent or overruled precedent sneaks into a submitted document. For instance, even the initial act of inputting a legal prompt requires some, often overlooked, level of knowledge and experience.

The quality of the prompt determines the quality of the resulting answer. Just compare “What are a condominium corporation’s rights in Alberta?” to “Can a condominium corporation in Alberta refuse to make an insurance claim when a flood results from the gross negligence of a unit owner?” After all, turning to AI for a legal draft of your discovery requests or trial brief is like turning to a plastic doll for companionship. It might grant you some temporary comfort and the illusion of dreams coming true and the job getting done, as long as you ignore the obvious: it is not a human you are interacting with and relying on. Your experience remains inherently flawed and limited. And so does the product of that experience.
 
The bluntness of the reality described above is not meant to suggest that AI cannot be used to the advantage of people, legal professionals included. It does suggest, however, that humans should approach any AI performance with caution, from task receipt and understanding, through information processing, to delivery of the final product.

Having personally worked for AI-based legal research companies, I can attest that in all instances the “final product” issued by the AI had to be, and was, reviewed by pre-qualified, pre-trained and actively licensed lawyers. Did AI somehow cut down on the companies’ overhead costs? It must have. This is why more and more solo practitioners and small and boutique law firms turn to third-party, ad hoc rosters of contract lawyers, rather than Westlaw or LexisNexis subscriptions, to prepare legal memoranda full of pre-vetted, triple-checked, on-point case law that can easily be turned into a piece of writing full of zealous advocacy. Similarly, more and more of those contract lawyers may themselves rely on AI algorithms, but under the provincial ethics rules and common-sense business practice, they still must review the product before turning it over as final.

The bottom line is that we, as lawyers or paralegals, whether serving members of the public or other practising members of the law society, are the primary and ultimate reason our clients hire us. It is not our ability to use AI or to review an AI-generated draft. It is our own unique function of reason, our kind of lawyering, our human or “people” part that has always been and will remain indispensable to the legal profession.

Natalia Bialkowska is a Canadian lawyer of Polish origin, educated in the United States. She is the founder of NB Law Firm, specializing in Canadian immigration and personal injury law. Based in Toronto, she serves clients speaking English, Polish and Spanish. Contact: 416-550-4746 or natalia@nblawfirm.ca.

The opinions expressed are those of the author and do not reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
