The Centre for Law, Technology and Society is delighted to announce that on October 7th, 2022, Katie Szilagyi successfully defended her PhD in Law thesis titled “Artificial Intelligence & the Machine-ation of the Rule of Law”, written under the supervision of the late Professor Ian Kerr, CLTS Faculty member.
Katie Szilagyi holds a BSc in Biosystems Engineering from the University of Manitoba; a JD from the University of Ottawa with joint specializations in International Law, and Law and Technology; and an LLM in Law and Technology from Tel Aviv University. After completing her JD, she clerked at the Federal Court of Appeal for Justice Marc Nadon and Justice Wyman W. Webb. She then spent a couple of years working as a commercial litigator at a large Toronto law firm, followed by a couple of years travelling the world solo. An avid moot court competitor during law school, she now coaches uOttawa moot court students in the Intellectual Property Advocacy programme. In 2019, she was a Global Fellow of the Institute of Technology & Society of Rio de Janeiro.
Dr. Szilagyi’s internal examiners were Professor Jane Bailey, Dr. Jason Millar, and Dr. Peter Oliver; Dr. Carys J. Craig was the external examiner; and Dr. Céline Castets-Renard chaired the thesis defence committee.
Abstract
In this dissertation, I argue that the Rule of Law is made vulnerable by technological innovations in artificial intelligence (AI) and machine learning (ML) that take power previously delegated to legal decision-makers and put it in the hands of machines. I assert that we need to interrogate the potential impacts of AI and ML in law: without careful scrutiny, their wide-ranging effects might erode certain fundamental ideals. Our constitutional democratic framework depends upon the Rule of Law: upon a contiguous narrative thread linking past legal decisions to our future lives. Yet incursions by AI and ML into legal process, including algorithms and automation, profiling and prediction, threaten longstanding legal precepts in state law and constraints against abuses of power by private actors. The spectre of AI over the Rule of Law is most apparent in proposals for "self-driving laws", the idea that we might someday soon regulate society entirely by machine.
Some academics have posited an approaching "legal singularity," in which the entire corpus of legal knowledge would be viewed as a complete data set, thereby rendering uncertainty obsolete. Advocates of such "regulation by machine" would then employ ML approaches on this legal data set to refine and improve the law. In my view, such proposals miss an important point by assuming machines can necessarily outperform humans, without first questioning what such performance entails and whether machines can meaningfully be said to participate in the normative and narrative activities of interpreting and applying the law. Combining insights from three distinct areas of inquiry, legal theory, law-as-narrative scholarship, and technology law, I develop a taxonomy for analysing Rule of Law problems. I then apply this taxonomy to three different technological approaches powered by AI/ML systems: sentencing software, facial recognition technology, and natural language processing. Ultimately, I take the first steps towards developing a robust normative framework to prevent a dangerous disruption to the Rule of Law.