
AI can boost legal sector if issues are fixed: Shiranee Tilakawardane J.

25 Feb 2021

An Artificial Intelligence (AI) system provides opportunities for the legal and justice sector to optimise processes and procedures through algorithms, and can also act as an assistive technology in judicial decision-making. However, there are concerns regarding its ability to appropriately contextualise circumstances, as well as the reliability and accuracy of its predictions. With proper regulation and the introduction of corrective measures, this evolving technology could potentially address such concerns. These observations were made in an article authored by Supreme Court (SC) Judge (Retd.) Justice (J.) Shiranee Tilakawardane titled “AI in the Legal System: Opportunities and Concerns”, which was published in Volume Five of the Journal by the Sri Lanka Judges’ Institute.

Legal AI has allowed lawyers to focus on the relevant sections of a contract when carrying out contract reviews, to conduct litigation analysis, to work with legal technology patents, and to stop issues from cropping up in non-disclosure agreements.

Legal AI systems fall into two categories. Legal retrieval systems allow lawyers to search a database of decided cases and statutes for information. Legal analysis systems allow lawyers to take a set of facts and determine the ramifications of those facts in an area of law. Analysis systems come in two types, namely, judgment machines and legal expert systems. Some legal expert systems provide legal information, summarise results, and translate the formal legal language of the content into more conversational natural language in their responses. Other legal expert systems are case-based. Still others reveal strategic insights on opposing parties and counsel, track the records and key decisions of presiding judges, and illuminate trends in case resolutions, findings, and damages.

Automated AI processes also allow for due diligence reviews; the preparation of contracts through standardised terms and conditions; self-service tools that clients can use to prepare their own contracts; contract management (reviewing a contract database and managing risk); legal operations analysis (assessing the efficiency of a firm and assigning work to lawyers); litigation analysis (given a set of facts, assessing and predicting the success rate of a case); wrongdoing detection (taking predictive coding to the level where it can predict a problem before it even arises); and legal research. AI also allows legal administrative staff to supervise the information being fed into these systems, and AI-based systems can monitor matters of ethics such as conflicts of interest. In Mexico, judges and clerks are being advised by the Expertius system software on whether a plaintiff is eligible for the grant of a pension.
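To make the retrieval category concrete, the following is a minimal, purely illustrative sketch of a keyword search over a small in-memory database of decided cases. The case entries and function names are hypothetical and do not reflect any particular product mentioned in the article.

```python
# Minimal sketch of a legal retrieval system: a keyword search over a
# small in-memory database of decided cases. All case entries and
# function names are hypothetical, for illustration only.

CASE_DATABASE = [
    {"name": "Case A v. B", "year": 2015,
     "summary": "Contract dispute over non-disclosure obligations."},
    {"name": "Case C v. D", "year": 2018,
     "summary": "Sentencing appeal concerning recidivism risk evidence."},
]

def retrieve_cases(query: str) -> list[dict]:
    """Return cases whose summaries mention every query keyword."""
    keywords = query.lower().split()
    return [case for case in CASE_DATABASE
            if all(kw in case["summary"].lower() for kw in keywords)]

if __name__ == "__main__":
    for case in retrieve_cases("recidivism risk"):
        print(case["name"], case["year"])  # prints: Case C v. D 2018
```

A legal analysis system, by contrast, would start from a set of facts rather than keywords and reason about the legal ramifications of those facts.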
The primary issues raised by scholars and practitioners regarding the use of AI-based systems in the legal sector concern bias, the capability of making objective determinations, confidentiality, proprietary knowledge, reliability, accuracy, and the contribution such systems make towards stigmatisation, noted Tilakawardane J. In the Wisconsin SC case of Loomis v. Wisconsin, the defendant claimed that the report produced by the AI algorithm violated his right to due process. The algorithm built into the software indicated that the defendant demonstrated a high risk of violence, recidivism, and pre-trial risk; this indication was in turn taken into account by the judges and resulted in a six-year sentence. The Electronic Privacy Information Centre reported that the algorithms used in such cases took into account personal data, including information concerning sex, employment status, and age, when recommending a particular sentence.

An analytic examination of the assessments (used by courts to determine matters such as bail and release dates) created by the algorithm used in Loomis v. Wisconsin, covering 7,000 suspects arrested in Florida, USA, revealed that black defendants were twice as likely as white defendants to be tagged as repeat offenders. Tilakawardane J. therefore observed that AI-based systems and their coded algorithms should ensure that bias is minimised, and that any bias that does arise is formally assessed.

This also raises issues pertaining to confidentiality and proprietary knowledge: because the algorithms are proprietary in nature, the laws of confidentiality and intellectual property impede access to them, over concerns that such access would give an undue and unfair advantage to companies other than the software developer. “Therefore, there are no means through which the algorithm could be examined for fallibilities, inefficacies, or weaknesses including regression analysis.” Tilakawardane J. thus noted that there are serious concerns over whether the algorithms are erroneously coded, and whether their development reflects the biases or prejudices of their creators (in the design and deployment of AI tools and services) and prejudicial readings of legal and socioeconomic data. In addition, they have not been tested for biases such as race. The other issues are the lack of reliability and accuracy, and the contribution towards the stigmatisation of defendants, as these systems are unable to take into account the context and circumstances within which an offence took place, Tilakawardane J. noted.

As far as regulations and corrective measures are concerned, there must be a legislative and regulatory framework on the development, audit, and use of AI tools and services, in order to ensure that these algorithms are transparent and certified free of bias, and that the data behind them is made available as open source.
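The kind of disparity finding described above can be illustrated with a simple audit that compares how often each group is tagged as high risk. This is a hypothetical sketch with invented sample records; it is not the actual methodology or data of the Florida analysis.

```python
# Hypothetical sketch of a disparity audit: comparing how often a risk
# algorithm tags defendants in each group as likely repeat offenders.
# The records below are invented sample data, not the Florida dataset.

# Each record is (group, tagged_high_risk).
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def high_risk_rate(records: list[tuple[str, bool]], group: str) -> float:
    """Fraction of a group's defendants tagged as high risk."""
    flags = [tagged for g, tagged in records if g == group]
    return sum(flags) / len(flags)

rate_a = high_risk_rate(records, "group_a")
rate_b = high_risk_rate(records, "group_b")
print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}, "
      f"ratio: {rate_a / rate_b:.1f}x")  # group_a: 0.67, group_b: 0.33, ratio: 2.0x
```

In practice, as the article notes, such checks are impeded precisely because the proprietary algorithms and their underlying data are not open to examination.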

