There are a few examples where Artificial Intelligence (AI) is the subject of legal pronouncements of various types. A significant legal distinction exists between treating AI software as a “thing” (e.g., as property) and treating it as a legal entity. Science fiction, and some product marketing literature, provides a vision of AI as an intelligent decision-making software entity. The reality is that today’s AI systems are decidedly not intelligent thinking machines in any meaningful sense. These AI systems operate largely through heuristics: by detecting patterns in data and using knowledge, rules, and information that have been specifically encoded by people. It is important, however, to emphasize how dependent such machine learning is upon the availability and integrity of data. Humans have developed philosophies for ethical human interactions over thousands of years of recorded history. Formulating the appropriate ethical considerations for interactions with, uses of, and conduct by AI entities is a much more recent development. This motivates an assessment of the scope of AI entity recognition in the legal field, and of the ethical risks such recognition may pose.
In a more widespread category of applications, AI is being used in the implementation processes of the law. The rule of law provides an unstated context for the day-to-day activities of ordinary citizens. Laws remain applicable even when not at the forefront of citizens’ attention. In modern society, everyone is governed by the law and uses the tools of the law (e.g., contracts) to conduct their personal and business activities. The law is an accretion of hundreds of years of human experience, distilled through formalized mechanisms for the adoption or adaptation of laws subject to human supervision, explanation, and influence. Whether in public law (e.g., criminal law) or private law (e.g., contracts), the legal system operates through human agents. For a variety of reasons, the human agents of the legal system are increasingly adopting AI technologies. Beyond the extremes of speculative science fiction, information about the scope of adoption of AI technologies in the law rarely reaches mainstream audiences. Adoption of AI within the various roles and processes of the legal system proceeds on an incremental basis. Ordinary citizens are not typically engaged in such decisions, nor are they notified of the adoption or use of AI systems.
The rise of the Machine Learning (ML) flavor of AI has been fueled by a massive increase in the creation and availability of data on the Internet. Institutions, organizations, and societal processes increasingly operate using computers with stored, networked data related in some way to ordinary citizens. The law is one domain where, except in particular niches, high-quality, machine-processable data is currently comparatively scarce, with most legal records being unstructured natural language text. Data about ordinary citizens is increasingly being captured, stored, and analyzed as big data. This analysis is often performed by ML systems that detect patterns in the data and apply those detection decisions in various ways. Data about ordinary citizens is often acquired for one purpose, perhaps even with the user’s consent, or under color of law; but once captured, it may be subject to other secondary uses and analyses. Ordinary citizens have limited, or no, control over the ways in which they are represented in such data. While the general public may not have much awareness of AI software capabilities, there is evidence of increasing public awareness of, and concern regarding, the large-scale data collection practices that enable AI software [1].
Consider the private applications of AI in law. For more than 20 years, companies have been able to use rules-based AI that captures legal constraints and business policies to help them comply with the law while meeting their business objectives. More recently, computable contracts (also known as smart contracts) have been developed, particularly in the context of blockchain technologies. These enable the automation of legally binding operations through software. Such “smart contracts” do not require traditional ML or rules-based, expert-system AI functions, but they do provide for autonomous execution of legally binding actions. Consumer use of AI systems for legal purposes is also increasing. Most taxpayers are familiar with rules-based software for tax return preparation, which can be classified as an expert system. There are also simple expert systems, often in the form of chatbots, that provide ordinary citizens with answers to basic legal questions [2]. AI implementations as the tools of private law in everyday use (e.g., contract automation) are becoming more widespread.
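The rules-based expert systems described above work by encoding legal constraints as explicit rules that are evaluated against a user’s facts. A minimal sketch is shown below; the filing categories, thresholds, and function names are hypothetical illustrations, not drawn from any actual tax code.

```python
# Minimal sketch of a rules-based "expert system" for a basic legal
# question. All rules and thresholds here are hypothetical.

def standard_deduction(filing_status: str) -> int:
    # Hypothetical lookup table encoding one rule of a tax code.
    table = {"single": 12000, "married": 24000}
    return table[filing_status]

def must_file_return(income: int, filing_status: str) -> bool:
    # Hypothetical rule: a return is required when income exceeds
    # the standard deduction for the filing status.
    return income > standard_deduction(filing_status)

print(must_file_return(15000, "single"))   # True
print(must_file_return(20000, "married"))  # False
```

The point of the sketch is that the system’s “expertise” lives entirely in rules specifically encoded by people, consistent with the earlier observation that such systems are heuristic rather than intelligent.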
Governmental officials of various kinds are increasingly using AI systems to make substantive legal or policy decisions. Often, government agencies have programmed systems that contain a series of rules about when applicants for benefits should be approved and when they should not. These systems often contain automated computer assessments that either entirely prescribe the outcome of the decision or, at the very least, influence it. Judges are increasingly using software systems that employ AI to provide a score that attempts to quantify a defendant’s risk of reoffending. Although the judge is not bound by these automated risk assessment scores, they are often influential in the judge’s decisions. The census, tax records, and a host of other reporting obligations create a flood of citizen-created data flowing to various governmental agencies. Machine-generated and machine-collected data concerning citizens (and the national environment) is also routinely collected and processed by governmental entities. ML technology is used in predictive policing to detect patterns in past crime data in an attempt to predict the location and time of future crimes. Governmental databases containing photos of those who have previously come into contact with the government or law enforcement provide a data source for facial recognition software.
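The automated assessments described above can be sketched as a scoring function whose output informs, rather than replaces, an official’s decision. The factors, weights, and threshold below are entirely hypothetical; real deployed risk-assessment tools are far more complex and often proprietary.

```python
# Hypothetical sketch of an automated score that informs, but does
# not replace, a human official's decision.

# Hypothetical factors and weights (not from any real system).
WEIGHTS = {"prior_incidents": 2.0, "missed_appointments": 0.5}

def risk_score(facts: dict) -> float:
    # Weighted sum of encoded factors.
    return sum(WEIGHTS[k] * facts.get(k, 0) for k in WEIGHTS)

def recommend(facts: dict, threshold: float = 3.0) -> str:
    # The system outputs only a recommendation; the final decision
    # remains with the human official.
    if risk_score(facts) >= threshold:
        return "flag for review"
    return "routine"

print(recommend({"prior_incidents": 2}))  # flag for review
print(recommend({"missed_appointments": 1}))  # routine
```

Even this toy example illustrates the governance concern raised above: the choice of factors and weights is an encoded policy decision, yet the score can strongly influence outcomes for citizens who never see how it is computed.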
The vision of AI software as an entity raises the question of whether the law should recognize such an entity as a legal person. While this is a subject of discussion in many countries, individual examples of AI systems have gained some form of legal recognition. As examples: in 2014, Vital, a robot developed by Aging Analytics UK, was appointed as a board member of the Hong Kong venture capital firm Deep Knowledge Ventures; in 2016, a Finnish company (Tieto) appointed Alicia T, an expert system, as a member of the management team with the capacity to vote; and, in 2017, “Sophia” (a social humanoid robot developed by the Hong Kong-based company Hanson Robotics, in collaboration with Google’s parent company Alphabet and SingularityNET) reportedly received citizenship in Saudi Arabia and was named the first Innovation Champion of the United Nations Development Programme [3]. Non-human legal entities (e.g., corporations) have long been recognized by the law in most jurisdictions, but these are typically governed by human boards of directors. While humans have had experience with corporations for well over a hundred years, legally recognizable AI entities are a much newer concept, and the norms of ethical behavior for interaction with such entities have yet to be established.
The exponential growth in data over the past decade has impacted the legal industry: both requiring automated solutions for the cost-effective and efficient management of the volume and variety of big (legal) data, and enabling AI techniques based on machine learning for the analysis of that data. Legal innovations enabling the recognition of software as legal entities (e.g., BBLLCs, DAOs) are starting to emerge. The author William Gibson noted that “the future is already here; it’s just not evenly distributed” [4]. Deployments of AI systems in both public and private law applications are proceeding, niche by niche, as the economics warrant and the effectiveness is demonstrated. Robo-advisors, intelligent or otherwise, and software entities as counterparties have already emerged in financial applications. Scaling such AI systems from isolated niches to integrated solutions may be an entrepreneurially attractive value proposition. Typical diffusion curves for technology initially scale rapidly and then slow down. Because the legal system affects everyone, rapidly scaling the pervasiveness of AI in the law seems a disquieting prospect. AI systems seem to thrive on data about us humans. How much visibility do citizens have into the pervasiveness of AI system deployments?
An extended treatment of this topic is available in a paper presented at the IEEE 4th International Workshop on Applications of Artificial Intelligence in the Legal Industry (part of the IEEE Big Data Conference 2020).
References
[1] Wright, S. A. (2019, December). Privacy in IoT blockchains: With big data comes big responsibility. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 5282-5291). IEEE.
[2] Morgan, J., Paiement, A., Seisenberger, M., Williams, J., & Wyner, A. (2018, December). A chatbot framework for the Children’s Legal Centre. In The 31st International Conference on Legal Knowledge and Information Systems (JURIX).
[3] Pagallo, U. (2018). Vital, Sophia, and Co.—The quest for the legal personhood of robots. Information, 9(9), 230.
[4] Gibson, W. (1993). https://en.wikiquote.org/wiki/William_Gibson