There are a few examples where Artificial Intelligence (AI) is the subject of legal pronouncements of various types. A significant legal distinction exists between treating AI software as a “thing” (e.g., as property) and treating AI software as a legal entity. Science fiction, and some product marketing literature, provides a vision of AI as an intelligent decision-making software entity. The reality is that today’s AI systems are decidedly not intelligent thinking machines in any meaningful sense. These AI systems operate largely through heuristics—by detecting patterns in data and using knowledge, rules, and information that have been specifically encoded by people. It is important, however, to emphasize how dependent such machine learning is upon the availability and integrity of data. Humans have developed philosophies for ethical human interactions over thousands of years of recorded history. Formulation of the appropriate ethical considerations for use with, of, and by AI entities is a much more recent development. This motivates the need for assessment of the scope of AI entity recognition in the legal field, and the assessment of the ethical risks this may pose.
In a more widespread category of applications, AI is being used in the implementation processes of the law. The rule of law provides an unstated context for the day-to-day activities of ordinary citizens. Laws remain applicable even when not at the forefront of citizens' attention. In modern society, everyone is governed by the law and uses the tools of the law (e.g., contracts) to conduct their personal and business activities. The law is an accretion of hundreds of years of human experience, distilled through formalized mechanisms for the adoption or adaptation of laws subject to human supervision, explanation, and influence. Whether in public law (e.g., criminal law) or private law (e.g., contracts), the legal system operates through human agents. For a variety of reasons, the human agents of the legal system are increasingly adopting AI technologies. Beyond the extremes of speculative science fiction, information about the scope of adoption of AI technologies in the law rarely reaches mainstream audiences. Adoption of AI within the various roles and processes of the legal system proceeds on an incremental basis. Ordinary citizens are not typically engaged in such decisions, nor are they notified of the adoption or use of AI systems.
The rise of the Machine Learning (ML) flavor of AI has been fueled by a massive increase in the creation and availability of data on the Internet. Institutions, organizations and societal processes increasingly operate using computers with stored, networked data related in some way to ordinary citizens. The law is one of those domains where, except in particular niches, high-quality, machine-processable data is currently comparatively scarce, with most legal records being unstructured natural language text. Data about ordinary citizens is increasingly being captured, stored and analyzed as big data. This analysis is often performed by ML systems detecting patterns in the data and applying those detection decisions in various ways. Data about ordinary citizens is often acquired for one purpose, perhaps even with the user's consent, or under color of law; but once captured it may be subject to other secondary uses and analyses. Ordinary citizens have limited, or no, control over the ways in which they are represented in such data. While the general public may not have much awareness of AI software capabilities, there is evidence of increasing public awareness and concern regarding the large-scale data collection practices which enable AI software [1].
Consider the private applications of AI in law. For more than 20 years, companies have been able to use rules-based AI capturing legal constraints and business policies to help them comply with the law while meeting their business objectives. More recently, computable contracts (also known as smart contracts) have been developed, particularly in the context of blockchain technologies. These enable automation of legally binding operations through software. These "smart contracts" do not require traditional ML or rules-based, expert-system AI functions, but do provide for autonomous execution of legally binding actions. Consumer use of AI systems for legal purposes is also increasing. Most taxpayers are familiar with rules-based software for tax return preparation, which can be classified as an expert system. There are also simple expert systems—often in the form of chatbots—that provide ordinary citizens with answers to basic legal questions [2]. AI implementations as the tools of private law in everyday use (e.g., contract automation) are becoming more widespread.
Governmental officials of various kinds are increasingly using AI systems to make substantive legal or policy decisions. Often, government agencies have programmed systems that contain a series of rules about when applicants for benefits should be approved and when they should not. These systems often contain automated computer assessments that either entirely prescribe the outcome of the decision or, at the very least, influence it. Judges are increasingly using software systems that employ AI to provide a score that attempts to quantify a defendant's risk of reoffending. Although the judge is not bound by these automated risk assessment scores, they are often influential in the judge's decisions. The census, tax records and a host of other reporting obligations create a flood of citizen-created data flowing to various governmental agencies. Machine-generated data concerning citizens (and the national environment) is also routinely collected and processed by governmental entities. ML technology is used in predictive policing to detect patterns from past crime data in an attempt to predict the location and time of future crime attempts. Governmental databases that contain photos of those who have previously come into contact with the government or law enforcement provide a data source for facial recognition software.
The vision of AI software as an entity raises the question of whether the law should recognize such an entity as a legal person. While this is a subject of discussion in many countries, individual examples of AI systems have gained some form of legal recognition. As examples: in 2014, Vital, a robot developed by Aging Analytics UK, was appointed as a board member of the Hong Kong venture capital firm Deep Knowledge Ventures; in 2016, a Finnish company (Tieto) appointed Alicia T, an expert system, as a member of the management team with the capacity to vote; and, in 2017, "Sophia" (a social humanoid robot developed by Hong Kong-based company Hanson Robotics, in collaboration with Google's parent company Alphabet and SingularityNET) reportedly received citizenship in Saudi Arabia and was named the first Innovation Champion of the United Nations Development Program [3]. Non-human legal entities (e.g., corporations) have previously been recognized by the law in most jurisdictions, but these are typically governed by human boards of directors. While humans have had experience with corporations for over a hundred years, legally recognizable AI entities are a much newer concept, and the norms of ethical behavior for interaction with such entities have yet to be established.
The exponential growth in data over the past decade has impacted the legal industry: both requiring automated solutions for the cost-effective and efficient management of the volume and variety of big (legal) data, and enabling AI techniques based on machine learning for the analysis of that data. Legal innovations enabling the recognition of software as legal entities (e.g., BBLLCs, DAOs) are starting to emerge. The author William Gibson [4] observed that the future is already here; it is just not evenly distributed. Deployments of AI systems in both public and private law applications are proceeding, niche by niche, as the economics warrant and the effectiveness is demonstrated. Robo-advisors (intelligent or otherwise) and software entities acting as counterparties have already emerged in financial applications. Scaling such AI systems from isolated niches to integrated solutions may be an entrepreneurially attractive value proposition. Typical diffusion curves for technology initially scale rapidly and then slow down. Because the legal system affects everyone, rapidly scaling the pervasiveness of AI in the law seems a disquieting prospect. AI systems thrive on data about humans; how much visibility do citizens have into the pervasiveness of AI system deployments?
An extended treatment of this topic is available in a paper presented at the IEEE 4th International Workshop on Applications of Artificial Intelligence in the Legal Industry (part of the IEEE Big Data Conference 2020).
References
[1] Wright, S. A. (2019, December). Privacy in IoT blockchains: With big data comes big responsibility. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 5282-5291). IEEE.
[2] Morgan, J., Paiement, A., Seisenberger, M., Williams, J., & Wyner, A. (2018, December). A Chatbot Framework for the Children's Legal Centre. In Proceedings of the 31st International Conference on Legal Knowledge and Information Systems (JURIX).
[3] Pagallo, U. (2018). Vital, Sophia, and Co.—The quest for the legal personhood of robots. Information, 9(9), 230.
Blockchains are an interesting new technology emerging into large-scale commercial deployments for a variety of different applications. While cryptocurrencies were the initial application, the development of smart contracts has enabled a broader variety of transactions on blockchains. Financial transactions using blockchain smart contracts have become a significant element of broader transformations in the financial services industry. "FinTech" refers generally to the broader transformation of financial services by technology solutions. "DeFi" refers to a more specific, though perhaps less widely supported, transformation towards Decentralized Financial services using permissionless (or public) blockchains.
Smart contracts automate transactions of cryptocurrencies and other tokenized digital assets between account holders. Smart contracts execute on the blockchain and use the blockchain to maintain transaction state information. Some smart contracts may also use oracles to interact with cyber-physical resources, off-chain computing resources or other information sources. Not every smart contract is required to have legal significance, but this is generally required for financial transactions above a certain size, so that legal recognition and enforcement of the financial transaction can be available.
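As a rough illustration of this execution model, the sketch below (plain Python, not actual on-chain code; all names are hypothetical) shows a conditional payment whose ledger state transition depends on an oracle's answer to an off-chain question:

```python
# A plain-Python sketch (not on-chain code; all names hypothetical) of the
# pattern above: ledger state plus an oracle answering an off-chain question.
from typing import Callable

def settle_if_delivered(ledger: dict, buyer: str, seller: str,
                        price: int, delivery_oracle: Callable[[], bool]) -> str:
    """Transfer `price` from buyer to seller once the oracle reports delivery."""
    if not delivery_oracle():
        return "pending"            # no state change recorded on the ledger
    if ledger.get(buyer, 0) < price:
        return "insufficient funds"
    ledger[buyer] -= price          # update transaction state
    ledger[seller] = ledger.get(seller, 0) + price
    return "settled"

ledger = {"buyer": 100, "seller": 0}
print(settle_if_delivered(ledger, "buyer", "seller", 75, lambda: True))  # settled
```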
The scope of a legal contract is a fundamental factor in any legal contractual dispute. Generally, disputes over contract scope center on whether the contract is completely contained in a single document, or whether there are additional contractual terms captured elsewhere. An analogous problem exists in the context of smart contracts as to the scope of the agreement. The academic literature has recognized a continuum of solutions between two extremes: (a) the code is the contract, versus (b) the code is an implementation of a separate legal document. In practice, not all the terms and clauses of a typical legal contract are executable by a smart contract, hence intermediate solutions are desirable. Intermediate solutions include (i) the annotation of code with legal terms that are not executable by the smart contract, or (ii) the annotation of traditional contractual language to identify terms that might be computable by a smart contract. It may be easier to think of these intermediate solutions as targeting different types of users. Type (i) smart contracts might be of particular interest to software developers operating in a relatively fixed legal environment. Type (ii) smart contracts might be of particular relevance to lawyers and other non-software developers who are interested in focusing on the terms and clauses without being so concerned about the software implementation mechanics. Templated legal contracts have previously been used for contract automation, and this approach also applies for type (ii) smart contracts.
Given the dissonance in practical implementations between the scope of the legal contract and the corresponding executable smart contracts, it becomes interesting to consider how to measure the gap between these entities. Clack (2018) considered comparing the semantic differences, but legal prose and computer source code are recorded with vastly different levels of precision, making this approach difficult. The ISDA (2018) whitepaper, in considering the automation of derivatives contracts via smart contracts, distinguishes between the contract terms that should or could be automated within the overall scope of terms in derivative contracts. Templated contracts provide a mechanism to identify the specific elements of the contract which are computationally significant. Some comparison of the quantity of computable terms within a contract against the size of the overall contract may therefore provide a useful perspective on the degree of automation of the contract.
There are a number of tools and methods for capturing templated contracts. Similarly, a variety of languages exist for encoding the corresponding smart contract. Several of these are even available, to varying degrees, in open source, thus enabling easier access for study. The Accord Project[1], hosted by the Linux Foundation, is one such open source project providing tools and templates for smart legal contracts. In particular, this project provides[2] (as of 9/1/2020) a repository with 51 examples of contract text with corresponding data structures for the data fields that are computable within the smart contracts. The figure below provides plots of various measures of the size of the legal clause or contract (# words, # sentences, # paragraphs, # clauses) versus counts of the number of templated data fields, with a linear trendline for reference. These plots show an increasing trend in the number of templated terms with the size of the contract. But the number of templated data fields was rather low in comparison to the size of the contracts, with (on average) fewer than 5 per legal clause, fewer than 2.5 per paragraph, and fewer than 1.5 per sentence; overall, less than 10% of words were templated as data fields.
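The counting itself is straightforward. The sketch below (Python) compares templated data fields against overall clause size; the clause text and the {{field}} delimiter syntax are assumptions for illustration, loosely following contract-template systems, and are not drawn from the Accord repository itself:

```python
# Illustrative measurement: count templated data fields vs. overall clause size.
import re

clause = ("Late delivery penalty: {{penaltyRate}} percent of the contract value "
          "per {{penaltyPeriod}} of delay, capped at {{capPercent}} percent.")

fields = re.findall(r"\{\{(\w+)\}\}", clause)              # templated data fields
words = re.sub(r"\{\{\w+\}\}", "X", clause).split()        # total words in clause

print(f"{len(fields)} data fields, {len(words)} words, "
      f"{len(fields) / len(words):.0%} of words templated")
```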
This corpus may be rather small; a larger corpus may provide for more statistical rigor. This corpus is also intended as exemplary; it is an aid to illustrate the operation, and feasibility, of the smart contract functionality provided by Accord. In that sense, it may not be representative of commercial smart contracts in operation on various blockchains.
If we consider the templated data fields as being the terms that should be automated, in ISDA's parlance, having only 10% of the words in this category would seem to indicate a relatively low degree of automation at present. This may be indicative of the current state of the technology, where the easier use cases are automated first. It would be interesting to better understand the limits of the contract terms that could be automated via smart contracts. Some grammatical constructs of natural language (e.g., "the", "of") may have no 1:1 semantic equivalent in computation. Some typical legal clauses (e.g., choice of law, choice of venue) may require action by parties off the blockchain that is difficult to automate in a smart contract. Hence, the proportion of words in a legal contract that can have computational significance in a smart contract may be well below 100%.
References
Clack, C. D. (2018). Smart Contract Templates: legal semantics and code validation. Journal of Digital Banking, 2(4), 338-352.
The development of telegraphy brought about the same types of dispute that occur in the era of the internet, and judges were required then, as now, to adapt old laws to new technologies. Exchanges of telegrams were generally held to provide evidence of intent to contract within the relevant statute of frauds. Documents sent by facsimile transmission have also generally been accepted in common law jurisdictions. There have been cases where judges have implied that a document has been signed, even in the absence of a manuscript signature, where there is sufficient evidence to show the person signing the document adopted the content. Many bureaucratic processes have forms requiring signatures where there may be no legal authority or requirement for a signature (see e.g., Wynn & Israel 2018). Manuscript signatures can, of course, be forged. To test both the validity and the effectiveness of a manuscript signature, various jurisdictions have required the signatures on some classes of documents (e.g., wills, land conveyances, high-value contracts, etc.) to be affixed in the presence of a witness or an authorized official, such as a notary. The function of the witness or notary is to provide additional evidence assuring the validity of the signature on the document. This generally requires that witnesses or notaries be independent and have no conflicts of interest (e.g., medical staff witnessing patient legal documents (Starr 2016)). Witnesses and notarial acts are generally required to authenticate a signature for legal purposes.
Sometimes there is a legal need for some form of official to authenticate a document, e.g., for a document to be accepted in some foreign legal processes. Georgia has two separate and distinct state agencies authorized[1] to authenticate documents for use by foreign countries. If the document is going to a foreign country that is a participant in the Hague Apostille Convention, an Apostille is obtained via the Georgia Superior Court Clerks. If the document is going to a foreign country that is not a participant in the Convention, it is authenticated via the Georgia Secretary of State with a Great Seal Certification. For domestic use, a notarial act with a certification of the notary public (a certificate of authority from the county court confirming that the notary is authorized by the county) is generally sufficient.
Signatures for legal entities
Legal entities (companies, LLCs, partnerships, etc.) have many of the same rights and obligations as natural persons, which may include a need for a signature, e.g., to enter into binding contracts or to endorse government-required documents. When signing for a corporation, a simple signature line with the name of the corporate officer is not the legally acceptable method of signature; instead, the signature must be presented in a signature block with the name of the corporation and the name, title and signature of a corporate officer.
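An illustrative form of such a signature block (the company and officer names here are hypothetical):

```
ACME WIDGETS, INC.

By: ____________________
Name: Jane Q. Smith
Title: President
```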
Legal entities can have their own signatures distinct from those of a natural person. Most states have enabled a Company Seal in their statutes governing corporations. Generally, the Company Seal acts as a signature executing a document on behalf of the corporation, but see:
The seal of the corporation may be affixed to any document executed by the corporation, but the absence of the seal shall not itself impair the validity of the document or of any action taken in pursuance thereof or in reliance thereon.
(O.C.G.A. 14-3-846 (c))
In many cases the Company Seal alone is insufficient evidence for company endorsement actions. Execution of instruments conveying an interest in real property or releasing security agreements requires natural-person signatures, often two signatures (e.g., the Company President and Company Secretary); the Company Seal is not necessary. (O.C.G.A. 14-5-7 (2010))
When a corporation is doing business, it must duly authorize each transaction. Entering contracts, concluding loans and endorsing checks or drafts all require the signature of a corporate officer with the authority to conduct business transactions on behalf of the company. Determining what constitutes a legal signature for a corporation may involve reading the bylaws, securing a board resolution or requesting some other certification of authority. Board resolutions may give general authority to act on behalf of the corporation or more limited powers to transact business (e.g., a schedule of authorizations). In some instances, unauthorized signatures will nonetheless bind a corporation, to protect the interests of innocent third parties. Apparent authority may also exist if two officers of the same corporation, such as the secretary and president, endorse an instrument. Bylaws and board resolutions are not filed with the Secretary of State in the state of incorporation, so you may have to request a copy from the corporation itself.
Notarial acts on signatures
For Notary Law in Georgia, see generally, O.C.G.A. 45-17-1 (2010). Notaries Public have authority anywhere within the State of Georgia to:
Witness or attest signature or execution of deeds and other written instruments;
Take acknowledgments;
Administer oaths and affirmations in all matters incidental to their duties as commercial officers and all other oaths and affirmations which are not by law required to be administered by a particular officer;
Witness affidavits upon oath or affirmation;
Take verifications upon oath or affirmation;
Make certified copies, provided that the document presented for copying is an original document and is neither a public record nor a publicly recorded document, certified copies of which are available from an official source other than a notary; and provided that the document was photocopied under supervision of the notary; and
Perform such other acts as notaries are authorized to perform by the laws of the State of Georgia.
A “notarial act” means any act that a notary is authorized to perform and includes, without limitation, attestation, the taking of an acknowledgment, the administration of an oath or affirmation, the taking of a verification upon an oath or affirmation, and the certification of a copy. All notarial acts must be accompanied by the Notary’s seal. “Attesting” and “attestation” are synonymous and mean the notarial act of witnessing or attesting a signature or execution of a deed or other written instrument, where such notarial act does not involve the taking of an acknowledgment, the administering of an oath or affirmation, the taking of a verification, or the certification of a copy. “Notarial certificate” means the notary’s documentation of a notarial act.
Notaries are commissioned for a four-year term by the Clerk of Superior Court in their county. Requirements generally include minimum age, legal residency in the state (or a bordering state, if employed in Georgia), and the ability to read and write English. The process[2] in Georgia requires an application form, a fee, an oath of office, and a notary seal.
The advent of COVID-19 has accelerated a trend towards electronic signatures and records. Groups like the Electronic Signature and Records Association[3] have been promoting uniform statutes and rules in areas such as eNotary, IRS eSignature policy, electronic will legislation, blockchain, etc. Remote Online Notarization (RON) has been enabled in many states[4] as a result of the COVID-19 outbreak. This authorization may be via RON legislation or emergency authorizations by state authorities. For example, in Arizona, ARS 44-7011 allows completion of a notarial act on an electronic message or document without the imprint of a notary seal. In Georgia, the Executive Order[5] by Governor Kemp (dated 3/31/2020) authorizes the use of real-time audiovisual communications for notarizations.
Legislative activity is often broadly framed so as to enable a variety of technical and commercial solutions to evolve. Public key cryptography has emerged as a technology with widespread commercial adoption. Other technologies like blockchains build on public key cryptography to enable larger systems. Cryptography research also continues to enable new capabilities in fields like security and privacy, including capabilities specifically relevant to signature-process roles like witnesses and notaries.
References
Starr, K. T. (2016). Should you witness a signature on a patient’s personal legal document?. Nursing2019, 46(12), 14.
Wynn, L. L., & Israel, M. (2018). The fetishes of consent: Signatures, paper, and writing in research ethics review. American Anthropologist, 120(4), 795-806.
Smart contracts and other blockchain transactions (e.g., transfers of assets represented by digital tokens) are purported to have legal significance. That legal significance hinges on assent by the parties, usually captured in a contract signature. Blockchains, distributed ledgers, smart contracts and similar technologies rely on cryptographic signatures for authentication and authorization of account transactions. Cryptographic keys are often treated as signatures representing account-holder identities. Technologists speak of "signing" documents with keys in public key cryptography. These cryptographic signature operations do not always have all the same characteristics as traditional manuscript signatures.
The function of a signature is generally determined by the nature and content of the document to which it is affixed (e.g., indicating agreement with, or endorsement of, the material above the signature). Historically, manuscript signatures have included a variety of formats: handwritten names, initials, or symbols (e.g., an X). In a legal context, a signature can provide a number of functions: primary evidential functions, secondary evidential functions, cautionary functions, protective functions, channeling functions, and record-keeping functions (Mason 2016, Ch. 1). Evidence relevant to manuscript signatures includes the identity of the person affixing the signature and the intention to authenticate and adopt the document. For legal entities (partnerships, corporations, etc.), the scope of signature authority can also be a significant factor. Several legal defenses may be applicable to manuscript signatures: forgery, conditionality, misrepresentation, not an act of the person signing, mental incapacity, mistake, document altered after signature, person signing did not realize the document had legal significance, and other defenses based on unreasonableness or unfairness.
The ESIGN Act (2000) established the general validity of electronic signatures and electronic contracts. It defines an electronic signature as:
The term “electronic signature” means an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record. (15 USC 7006 (5))
An example of an electronic signature is a biometric signature. A biometric signature is a binary coded representation of a person’s biometric characteristics used for authentication purposes in distributed computing systems (Bromme 2003). Another example definition of a type of electronic signature is the S-signature from the USPTO:
(d)(2) S-signature. An S-signature is a signature inserted between forward slash marks, but not a handwritten signature … (i) The S-signature must consist only of letters, or Arabic numerals, or both, with appropriate spaces and commas, periods, apostrophes, or hyphens for punctuation… (e.g., /Dr. James T. Jones, Jr./).
(iii) The signer’s name must be:
(A) Presented in printed or typed form preferably immediately below or adjacent the S-signature, and
(B) Reasonably specific enough so that the identity of the signer can be readily recognized. (37 CFR Sec. 1.4)
Other Federal regulations (e.g., CFTC regulations, see 17 CFR Part 1 Sec. 1.3) have very similar definitions for electronic signatures to the ESIGN Act. The UETA electronic signature definition (Uniform Law Commission 2019, Sect. 2 (7), (8)) as enacted by most states also has very similar language to the ESIGN Act (see e.g., Georgia's O.C.G.A. 10-12-2 (2010)). Arizona, Nevada and Tennessee, however, have amended their UETA statutes to incorporate blockchain and smart contracts (see e.g., A.R.S. 44-7061). The Uniform Law Commission guidance (2019b) considered this redundant and subject to preemption by the federal act. Beyond electronic signatures, the FDA regulations identify and distinguish a digital signature:
(5) Digital signature means an electronic signature based upon cryptographic methods of originator authentication, computed by using a set of rules and a set of parameters such that the identity of the signer and the integrity of the data can be verified.
(7) Electronic signature means a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual’s handwritten signature. (21 CFR Sec. 11.3)
Internationally, this distinction between electronic and digital signatures is also captured in UNCITRAL's Model Law on Electronic Signatures (United Nations 2001) and its associated guide, which examined various electronic signature techniques that purported to provide functional equivalents to (a) handwritten signatures, and (b) other kinds of authentication mechanisms used in a paper-based environment (e.g., seals or stamps). Electronic signatures were categorized into digital signatures based on public key cryptography and other electronic signature mechanisms (e.g., biometrics, PINs, clicking an acknowledgement box, etc.). NIST's Digital Signature Standard (Barker 2013) defines a digital signature algorithm based on the work of Rivest, Shamir and Adleman (1978). ANSI (1998) also defines an algorithm based on elliptic curve cryptography. Other jurisdictions have similar standards developed by other standards bodies (e.g., ETSI, ISO).
Use of cryptography for authentication purposes by producing a digital signature does not necessarily imply the use of cryptography to make any information confidential, since the digital signature may be merely appended to a non-encrypted message. A "hash function" is used in both creating and verifying a digital signature. A hash function is a mathematical process, based on an algorithm, which creates a standard-length, compressed, substantially unique digital representation (often referred to as a "message digest", "hash value" or "hash result") of the message. Widely used public key algorithms such as RSA are based on an important feature of large prime numbers: once two primes are multiplied together to produce a new number, it is particularly difficult and time-consuming to determine which two prime numbers created that new, larger number. Cryptographic algorithms such as RSA and elliptic curve schemes have no publicly known methods for rapid recovery of the private keys. Brute force approaches relying on massive computational resources become more feasible with technology trends (e.g., Moore's law) reducing the cost of computing, and with the commercial availability of massive cloud computing resources (e.g., Microsoft Azure, Amazon EC2, etc.). Quantum computing developments also threaten to undermine these algorithms. Hence there has been recent interest in improved ("post-quantum") cryptographic algorithms (see e.g., Alagic et al. 2019).
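The hash-then-sign pattern described above can be illustrated with a short sketch using the Python `cryptography` library; this is a minimal example, and the parameter choices are for illustration rather than a security recommendation:

```python
# A minimal hash-then-sign sketch using the Python `cryptography` library.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"We the undersigned agree to the terms above."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The library hashes the message (producing the "message digest") and signs it.
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification raises InvalidSignature if the message or signature was altered;
# note the message itself travels in the clear, with the signature appended.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```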
Cryptographic signatures, then, are the basis on which blockchains and smart contracts rely for asserting the legal significance of transactions that bind the parties. Beyond the creation of a signature, it is the operations and processes around those cryptographic signatures (in contrast to the operations and processes around manuscript signatures) that sustain any legal significance of these bits of information.
References
Alagic, G., Alperin-Sheriff, J. M., Apon, D. C., Cooper, D. A., Dang, Q. H., Miller, C. A., … Robinson, A. Y. (2019). Status Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process. NIST Interagency/Internal Report (NISTIR) – 8240.
ANSI, (1998). X. 63: Public Key Cryptography for the Financial Services Industry, Key Agreement and Key Transport Using Elliptic Curve Cryptography. American National Standards Institute.
Barker, E. B. (2013). Digital Signature Standard (DSS). Federal Inf. Process. Stds. (NIST FIPS) – 186-4.
Bromme, A. (2003, July). A classification of biometric signatures. In 2003 International Conference on Multimedia and Expo. ICME’03. Proceedings (Cat. No. 03TH8698) (Vol. 3, pp. III-17). IEEE.
Mason, S. (2016). Electronic signatures in law. University of London Press.
Rivest, R. L., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), 120–126.
Buterin's white paper [Buterin 2014] described smart contracts as "systems which automatically move digital assets according to arbitrary pre-specified rules". [ISO 2019] defined an "asset" as anything that has value to a stakeholder, and a "digital asset" as one that exists only in digital form or which is the digital representation of another asset; similarly, a "token" is a digital asset representing a collection of entitlements. The tokenization of assets refers to the process of issuing a blockchain token (specifically, a security token) that digitally represents a real tradable asset—in many ways similar to the traditional process of securitization [Deloitte 2018]. A security token could thus represent an ownership share in a company, real estate, or other financial asset. These security tokens can then be traded on a secondary market. A security token is also capable of having the token-holder's rights embedded directly onto the token and immutably recorded on the blockchain. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets, though smart contracts may also have other purposes.
Recall that smart contracts started as an enhancement providing a programmable virtual machine in the context of blockchains, and the initial applications of blockchains were cryptocurrencies. Cryptocurrencies have been recognized as commodities for the purpose of regulations on derivatives like options and futures. High-value smart contracts on cryptocurrency derivatives require substantial legal protection and often utilize standardized legal documentation provided by the International Swaps and Derivatives Association (ISDA). Smart contracts managing cryptocurrency derivatives aim to automate many aspects of the provisions of the ISDA legal documentation [Clack 2019]. There have been a number of efforts to extend blockchains and smart contracts beyond cryptocurrency applications to manage other types of assets. Initially these were custom dApps, but as interest in smart contracts for specific types of assets grew, a corresponding interest developed in having common token representations for particular types of assets, enabling broader interoperability and reducing custom legal and development risks and costs. Rather than having specialized blockchains for supply chain provenance and others for smart derivatives contracts, different tokens representing those asset classes can be managed by a smart contract independently of the underlying blockchain technology.
Not all tokens are intrinsically valuable; many derive their value by reference from some underlying asset. [Bartoletti 2017] classifies smart contracts by application domain as financial, notary (leveraging the immutability of the blockchain to memorialize some data), games (of skill or chance), wallet (managing accounts, sometimes with multiple owners), library (for general purpose operations, e.g., math and string transformations) and unclassified; the financial and notary categories had the most contracts. The notary smart contracts enabled smart contracts to manage non-cryptocurrency assets. [Alharby 2018] classified the smart contract literature using a keyword technique into: security, privacy, software engineering, application (e.g., IoT), performance & scalability, and other smart contract related topics. The application domains were identified as: Internet of Things (IoT), cloud computing, financial, data (e.g., data sharing, data management, data indexing, data integrity checks, data provenance), healthcare, access control & authentication, and other applications. [Rouhani 2019] categorized decentralized applications into seven main groups: healthcare, IoT, identity management, record keeping, supply chain, BPM, and voting. However, blockchain-based applications are not limited to these groups. Keywords and application identification provide a view of the breadth of applications, but these are not exclusive or finite categories—new applications or keywords can always be developed, extending those lists. Token-based music platforms have been proposed [Mouloud 2019]. Networked digital sharing-economy services enabling the effective and efficient sharing of vehicles, housing, and everyday objects can utilize a blockchain ledger and smart contracting technologies to improve peer trust and limit the number of required intermediaries, respectively [Fedosov 2018]. The tokenization of sustainable infrastructure can address some of the fundamental challenges in the financing of the asset class, such as lack of liquidity, high transaction costs and limited transparency [Uzoski 2019]. Tokens can also be useful from a privacy perspective. Tokenization, the process of converting a piece of data into a random string of characters known as a token, can be used to protect sensitive data by substituting non-sensitive data. The token serves merely as a reference to the original data, but does not determine those original data values. The advantage of tokens, from a privacy perspective, is that there is no mathematical relationship to the real data that they represent; the real data values cannot be obtained through reversal [Morrow 2019]. If the costs of tokenizing and marketing new asset classes are lower than the costs of traditional securities offerings, then this can enable securitization of new asset classes. The ability to subdivide some types of tokens may enable wider markets through reduced minimum bid sizes.
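A minimal sketch of the privacy-tokenization pattern described above (Python; the card number is a hypothetical placeholder, and the in-memory dict stands in for a protected token vault):

```python
# Privacy tokenization: the surrogate token is random, so it has no
# mathematical relationship to the data it stands in for.
import secrets

vault = {}

def tokenize(sensitive_value: str) -> str:
    token = secrets.token_hex(16)    # random; not derived from the data
    vault[token] = sensitive_value   # only the vault can map token -> value
    return token

def detokenize(token: str) -> str:
    return vault[token]

t = tokenize("4111-1111-1111-1111")  # hypothetical card number
print("store/share this:", t)        # safe downstream; reveals nothing
assert detokenize(t) == "4111-1111-1111-1111"
```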
In 2004, early proposals were made for XML data type definitions to capture electronic contracts [Krishna 2004]. In 2015, the Ethereum developer community adopted ERC-20 (which specifies a common interface for fungible tokens that are divisible and not distinguishable) to ensure interoperability [Vogelsteller 2015]. While the initial token applications may have been for cryptocurrencies, blockchains (especially Ethereum) are being applied in many other domains, and so the assets administered by smart contracts are being stretched beyond their original purpose to enable new applications. Studies of trading patterns would need to distinguish whether those tokens were all being used to represent the same kind of asset to be able to make valid inferences about a particular market for that asset. Stretching beyond fungible cryptocurrencies to enable popular new blockchain applications, like tracking supply chain provenance, requires a different kind of token: a non-fungible token. Non-fungible tokens (NFTs) are a new type of unique and indivisible blockchain-based token introduced in late 2017 [Regner 2019]. In 2018, the Ethereum community adopted ERC-721, which extends the common interface for tokens with additional functions to ensure that tokens based on it are distinctly non-fungible and thus unique [Entriken 2018].
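To make the fungible/non-fungible distinction concrete, the sketch below models the core ERC-20 accounting idea in plain Python. The function names balanceOf and transfer come from the ERC-20 interface, but this is an off-chain illustration, not Solidity, and the account names are hypothetical:

```python
# Off-chain sketch of ERC-20-style accounting: a fungible token is a mapping
# from account to interchangeable balance; transfers conserve the total supply.
class FungibleToken:
    def __init__(self, supply: int, issuer: str):
        self.total_supply = supply
        self.balances = {issuer: supply}

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        if self.balance_of(sender) < amount:
            return False             # ERC-20 transfers fail on insufficient funds
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount
        return True

token = FungibleToken(supply=1_000_000, issuer="alice")
token.transfer("alice", "bob", 250)
assert token.balance_of("bob") == 250
# An ERC-721 (non-fungible) token would instead map each unique token ID to a
# single owner, so individual tokens are distinguishable and indivisible.
```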
In 2018, [FINMA 2018] identified three classes of tokens: payment tokens, asset tokens and utility tokens. A utility token is intended to provide digital access to an application or service, by means of a blockchain-based infrastructure. This may include the ability to exchange the token for the service. The FINMA token model also considers the functions, features and distribution of a token:
Token functions: payment, utility, asset, yield;
Token features: stake rewards, sole medium of exchange;
Token distribution: initial drops and reservations for miners and service providers.
In 2019, [Hong 2019] proposed a non-fungible token structure for use in Hyperledger, and [Cai 2019] proposed a universal token structure for use in token-based blockchain technology. Also in 2019, an industry group, the Token Taxonomy Initiative, proposed a Token Taxonomy Framework [TTI 2019] in an effort to model existing tokens and to define new business models based on them. TTI defines a token as a representation of some shared value that is either intrinsically digital or a digital receipt or title for some material item or property, and distinguishes that from a wallet, which is a repository of tokens attributed to an owner in one or more accounts. TTI classifies tokens based on five characteristics they possess: token type (fungible or not), token unit (fractional, whole, singleton), value type (intrinsic or reference), representation type (common or unique), and template type (single or hybrid). The base token types are augmented with behaviors and properties captured in a syntax (the token formula). Particular token formulae would be suitable for different business model applications, e.g., loyalty tokens, supply chain SKU tokens, securitized real property tokens, etc. This Token Taxonomy Framework subsumes the functions, features and distribution aspects of the FINMA token model, enabling those regulatory perspectives as well as other properties of particular value in enabling different token-based business models.
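For illustration, the five TTI classification characteristics can be modeled as a simple data structure. This is a Python sketch mirroring the text above; it is not the TTI's own token formula syntax:

```python
# Modeling the five TTI classification characteristics as enums + a dataclass.
from dataclasses import dataclass
from enum import Enum

TokenType = Enum("TokenType", "FUNGIBLE NON_FUNGIBLE")
TokenUnit = Enum("TokenUnit", "FRACTIONAL WHOLE SINGLETON")
ValueType = Enum("ValueType", "INTRINSIC REFERENCE")
RepresentationType = Enum("RepresentationType", "COMMON UNIQUE")
TemplateType = Enum("TemplateType", "SINGLE HYBRID")

@dataclass
class TokenClassification:
    token_type: TokenType
    unit: TokenUnit
    value: ValueType
    representation: RepresentationType
    template: TemplateType

# e.g., a securitized real-property token: unique, indivisible, and deriving
# its value by reference to an off-chain asset.
deed_token = TokenClassification(TokenType.NON_FUNGIBLE, TokenUnit.WHOLE,
                                 ValueType.REFERENCE, RepresentationType.UNIQUE,
                                 TemplateType.SINGLE)
print(deed_token)
```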
The immutable, public Ethereum blockchain enables study of the trading patterns in ERC-20 tokens, revealing trading networks that display strong power-law properties (coinciding with current network theory expectations) [Soumin 2018]. Even though the entire network of token transfers has been claimed to follow a power law in its degree distribution, many individual token networks do not: they are frequently dominated by a single hub-and-spoke pattern. When considering initial token recipients and path distances to exchanges, a large part of the activity is directed towards these central instances, but many owners never transfer their tokens at all [Victor 2019]. There is strong evidence of a positive relationship between the price of ether and the prices of blockchain tokens. Token function does impact token price over a time period that spans both boom and bust. The designed connection can be effective, linking a project that has a value with a token that has a price, specifically in the absence of a legal connection or claim [Lo 2019]. From these preliminary studies, tokens seem to exhibit some of the trading properties of regular securities. Many of these initial tokens have no, or untested, legal connections to the underlying assets. While consistent behavior in boom and bust is important for an investment, from a legal perspective the predictability of outcomes for asset recovery during more stressful events (e.g., bankruptcy) may be more important.
A point of concern is understanding how tokens representing value will remain linked to the real asset that they represent. For example, imagine that you own tokens representing a small fraction of a set of gold coins at a bank, and some coins are stolen. Or the reverse: who owns the gold coins if the account keys are lost or the token destroyed? Being able to rationally predict what happens to your token and to the other token owners is crucially important, since the value of tokens becomes greatly undermined if they cannot be proven to be linked to real-world assets [Deloitte 2018]. In these types of cases, off-chain enforcement action is required. A typical legal tool for representing such interests in real assets would be recording security interests and liens in real property under the UCC. One approach would be to update the lien recordings for the new owner after each transaction. There are at least two difficulties with this approach. First, the smart contract of today may not be able to interact with manual off-chain legal recordation processes for security interests. Second, if the purpose of tokenizing the asset was to increase liquidity, frequent transactions may result in a high volume of updates overloading off-chain manual recordation processes. Another approach would be to use some centralized custody agent (similar to physical custody) and have them hold the lien in the public records as a trustee (perhaps keeping account records reflecting updates from transactions on a blockchain). If the smart contract were a legal entity (e.g., a Vermont-style BBLLC), then the BBLLC could be the entity with the security interest in the public records directly. However, the smart contract would need to be able to respond to legal actions on the lien, and may incur other obligations when acting as the custodian (e.g., reporting, insurance, licenses, etc.). The asset custodian as a traditional entity versus the BBLLC dApp provides alternatives for consideration. Traditional asset custodians provide an identifiable party from whom reparations can be sought in the event of asset loss or degradation. Asset custodians are commonly held to a fiduciary standard of care. A BBLLC approach emphasizes a digital distributed trust model; BBLLCs, however, may be challenged with off-chain enforcement actions and physical custody operations (e.g., physical asset audits). BBLLC custodians may require insurance to protect asset owners in the event of asset losses or degradation.
If ownership of an asset, such as a building, is split among thousands of people, there is little incentive for owners to bear the costs associated with that asset, such as maintenance and ensuring rent is collected [Deloitte 2018]. One can certainly imagine a smart contract detecting that rent has not been credited to an account, but what then can be done in terms of off-chain enforcement? While IoT blockchains can enable significant cyberphysical functionality, typical landlord capabilities of self-help and legal dispossessory actions would seem technically difficult or socially problematic. Some classes of contracts requiring off-chain enforcement actions may not be a good fit for complete implementation by dApp smart contracts at this stage, and may still require human physical agents or other legal entities for some actions.
Because the transaction of tokens is completed with smart contracts, certain parts of the exchange process are automated. For some classes of transactions, this automation can reduce the administrative burden involved in buying and selling, with fewer intermediaries needed, leading not only to faster deal execution but also to lower transaction fees [Deloitte 2018]. Elimination of intermediaries sounds financially efficient; eliminating all intermediaries, however, may not be wise for some classes of assets. An intermediate entity may be useful as a liability shield for remote owners. Consider a tokenized mobile asset (e.g., a drone or terrestrial vehicle) owned and operated via a smart contract, which injures another person or their property; most remote owners would insist on some limited liability entity or insurance. While smart-contract-operated vehicles may not be computationally feasible in the short term, even immobile asset classes like real estate can result in liabilities for the owner (e.g., a premises slip and fall). The point is that for some physical asset classes, the existence of at least one intermediate entity for the purpose of liability shielding may be desirable. The actions of smart contracts on public blockchains may also raise privacy concerns.
By tokenizing financial assets—especially private securities or typically illiquid assets—these tokens can then be traded on a secondary market of the issuer's choice, enabling greater portfolio diversification and capital investment in otherwise illiquid assets. Tokenization could open up investment in assets to a much wider audience through reduced minimum investment amounts and periods. Tokens can be highly divisible, meaning investors can purchase tokens that represent incredibly small percentages of the underlying assets. If each order is cheaper and easier to process, it will open the way for a significant reduction of minimum investment amounts [Deloitte 2018]. Token markets to date have often been via exempt ICOs that are restricted to accredited investors, minimizing regulatory filings, etc. Investment minimums are unlikely to be a major driver for accredited investors, though enabling investment in diverse, but otherwise illiquid, asset classes may be of interest for portfolio diversification. Enabling liquidity for mass-market investors would require security token investments to meet the higher regulatory standards for filings and disclosures necessary to bring those investments to the larger public markets. Smart contracts offer efficient process automation for trading and other transactions based on tokenized assets. While this can provide market efficiencies, not all asset classes are ready for tokenization without further consideration. Smart contracts may also need to take on additional behaviors to reflect the increased importance of their role in administering assets. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets.
References
[Alharby 2018] M. Alharby, et. al., “Blockchain-based smart contracts: A systematic mapping study of academic research (2018).” Proc. Int’l Conf. on Cloud Computing, Big Data and Blockchain. 2018.
[Bartoletti 2017] M. Bartoletti & L. Pompianu. “An empirical analysis of smart contracts: platforms, applications, and design patterns.” International conference on financial cryptography and data security. Springer, Cham, 2017.
[Cai 2019] T.Cai, et al. “Analysis of Blockchain System With Token-Based Bookkeeping Method.” IEEE Access 7 (2019): 50823-50832.
[Clack 2019] C. Clack, & C. McGonagle. “Smart Derivatives Contracts: the ISDA Master Agreement and the automation of payments and deliveries.” arXiv preprint arXiv:1904.01461 (2019).
[Fedosov 2018] A. Fedosov, et. al., “Sharing physical objects using smart contracts.” Proceedings of the 20th Int’l Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, 2018.
[Hong 2019] S. Hong, et. al., “Design of Extensible Non-Fungible Token Model in Hyperledger Fabric.” Proc. of the 3rd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. ACM, 2019.
[Krishna 2004] P. Krishna, et al., "An EREC framework for e-contract modeling, enactment and monitoring." Data & Knowledge Engineering 51.1 (2004): 31-58.
[Lo 2019] Y. Lo, et. al., “Assets on the Blockchain: An Empirical Study of Tokenomics.” Available at SSRN 3309686 (2019).
[Migliorini 2019] S. Migliorini, et. al., “The Rise of Enforceable Business Processes from the Hashes of Blockchain-Based Smart Contracts.” Enterprise, Business-Process and Information Systems Modeling. Springer, Cham, 2019. 130-138.
[Morrow 2019] M. Morrow, & M. Zarrebini. “Blockchain and the Tokenization of the Individual: Societal Implications.” Future Internet 11.10 (2019): 220.
[Mouloud 2019] K. Mouloud, “Blockchain in the Music Industry: A Study of Token Based Music Platforms” S. Diss. Aalborg University, 2019.
[Soumin 2018] S. Somin, et. al., “Network analysis of erc20 tokens trading on ethereum blockchain.” International Conference on Complex Systems. Springer, Cham, 2018.
[Victor 2019] F. Victor, & B. Lüders, “Measuring Ethereum-based ERC20 token networks.” International Conference on Financial Cryptography and Data Security. Springer, Cham, 2019.
There is some overlap between the concepts of autonomy in software agents and robots, and another overlap between the autonomy of software agents and DAOs. The concept of autonomous software agents has been around for more than 20 years. In 1996, [Franklin 1996] proposed a taxonomy of autonomous agents (biological, robotic or computational) and defined software agents as a type of computational agent (distinguished from artificial life), further categorized into task-specific agents, entertainment agents or viruses. Note that this defines agents as being autonomous, but does not provide a notion of the level of autonomy. In 1999, [Heckman 1999] reviewed potential liability issues arising from autonomous agent designs. Around the same time, [Barber 1999] quantitatively defined the degree of autonomy as an agent's relative voting weight in decision-making. Reviewing autonomous agents in 2000, [Dowling 2000] considered the delegation of any task to a software agent as raising questions in relation to its autonomy of action and decision, the degree of trust which can be vested in the outcomes it achieves, and the location of responsibility, both moral and legal, for those outcomes; but did not couple responsibility for decisions with levels of autonomy.
In 2003, [Braynov 2003] considered autonomy a relative concept depending on what a user (or another agent) expects from an agent, defining autonomy as a relation with four constituents: (1) the subject of autonomy: the entity (a single agent or a group of agents) which acts or makes decisions; (2) the object of autonomy: a goal or a task that the subject wants to perform or achieve, or a decision that the subject wants to make; (3) the affector of autonomy: the entity that has an impact on the subject's decisions and actions, thereby affecting the final outcome of the subject's behavior—the affector could be the physical environment, another agent (including the user), or a group of agents, and could either increase or decrease the autonomy of the subject; (4) the performance measure: a measure of how successful the subject is with respect to the object of autonomy. Around that time, [Brazier 2003] was considering whether agents could close contracts, and if so how liabilities might be allocated. While autonomy was seen as a complex relationship involving decision making, and the allocation of liabilities was seen as important, liability was not considered a dimension of autonomy. While [Braynov 2003]'s object of autonomy included decision making, it also included broader topics like goals.
In 2017, [Pagallo 2017] provided a historical perspective on the liability issues arising from automation through to autonomous systems, recognizing that historical approaches may be insufficient for the current challenges of technology. Considering levels of autonomy in agents from an ethical perspective, [Dyrkolbotn 2017] identified five "levels" of autonomy to pinpoint where a morally salient decision belongs on the following scale: (i) dependence, or level 1 autonomy: the behavior of the system was predicted by someone with a capacity to intervene; (ii) proxy, or level 2 autonomy: the behavior of the system should have been predicted by someone with a capacity to intervene; (iii) representation, or level 3 autonomy: the behavior of the system could have been predicted by someone with a capacity to intervene; (iv) legal personality, or level 4 autonomy: the behavior of the system cannot be explained only in terms of the system's design and environment—these are systems whose behavior could not have been predicted by anyone with a capacity to intervene; (v) legal immunity, or level -1: the behavior of the system counts as evidence of a defect; namely, the behavior of the system could not have been predicted by the system itself, or the machine did not have a capacity to intervene. Also around this time, [Millard 2017] was concerned with the need for clarity concerning liabilities in the IoT space. In this period the need for change in existing liability schemes was starting to be recognized, and the level definitions of [Dyrkolbotn 2017]'s autonomy scale, based on decisions, included notions of the legal consequences of decision making.
In 2019, [Janiesch 2019] surveyed the literature on autonomy in the context of IoT agents, identifying 20 category definitions for levels of autonomy based on whether the human or the machine performs the decision making, and 9 different dimensions or types of agent autonomy: interpretation, know-how, plan, goal, reasoning, monitoring, skill, resource and condition. They also identified 12 design requirements for autonomous agents and proposed an autonomy model and language. [Lebeuf 2019] proposed a definition of software (ro)bots as an interface paradigm including command line, graphical, touch, written/spoken language or some combination of interfaces, and also proposed a faceted taxonomy for software (ro)bots based on facets of the (ro)bot environment, intrinsic characteristics and interaction dimensions. The boundaries of these different types of software entities have become blurrier again, with less consensus on the meaning of level of autonomy in the context of software entities.
Back in 2003, [Braynov 2003] had noted that it was obvious that between the lack of autonomy and complete autonomy there is a wide range of intermediate states that describe an agent's ability to act and decide independently. A common thread around decision making as a basis for autonomy levels emerges from [Barber 1999], [Dowling 2000], [Braynov 2003] and [Dyrkolbotn 2017], but not a consensus on a particular metric. The recent recognition of the potential for changes in legal liability regimes to better reflect software agents brings to mind [Myhre 2019]'s assertion of accountability as a measure of autonomy. Whether through action or inaction, software agents may potentially cause injuries to people or their property. For purely software agents with no cyberphysical aspects, those injuries would have to be informational in nature (e.g., privacy violations, slander, etc.). While liability or accountability for autonomous decision making may not be the only useful dimension for autonomy in software agents, it does have some practical commercial value in quantifying risks, thus enabling more commercial activities based on software agents to proceed.
References
[Barber 1999] S. Barber, & C. Martin, “Agent Autonomy: Specification, Measurement, and Dynamic Adjustment” In Proceedings of the Autonomy Control Software Workshop, Agents ’99, pp. 8-15. May 1-5, 1999, Seattle, WA.
[Braynov 2003] S. Braynov, & H. Hexmoor. “Quantifying relative autonomy in multiagent interaction.” Agent Autonomy. Springer, Boston, MA, 2003. 55-73.
[Dowling 2000] C. Dowling, “Intelligent agents: some ethical issues and dilemmas.” Selected papers from the second Australian Institute conference on Computer ethics. Australian Computer Society, Inc., 2000.
[Dyrkolbotn 2017] S. Dyrkolbotn, et al., “Classifying the Autonomy and Morality of Artificial Agents.” CARe-MAS@PRIMA. 2017.
[Franklin 1996] S. Franklin, & A. Graesser. “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents.” International Workshop on Agent Theories, Architectures, and Languages. Springer, Berlin, Heidelberg, 1996.
[Heckman 1999] C. Heckman, & J. Wobbrock. “Liability for autonomous agent design.” Autonomous agents and multi-agent systems 2.1 (1999): 87-103.
[Janiesch 2019] C. Janiesch, et al. “Specifying autonomy in the Internet of Things: the autonomy model and notation.” Information Systems and e-Business Management 17.1 (2019): 159-194.
[Lebeuf 2019] C. Lebeuf, et al. “Defining and classifying software bots: a faceted taxonomy.” Proceedings of the 1st International Workshop on Bots in Software Engineering. IEEE Press, 2019.
[Millard 2017] C. Millard, et al., “Internet of Things Ecosystems: Unpacking Legal Relationships and Liabilities.” 2017 IEEE International Conference on Cloud Engineering (IC2E). IEEE, 2017.
There is some overlap between definitions of autonomous mobile devices and definitions of robots: some devices may meet both definitions, though non-mobile robots would not be covered by the previous definitions. Many of the notions of autonomy for autonomous mobile devices were proposed in terms of mobility tasks or challenges. Autonomy in the context of robots has a number of different interpretations. [Richards 2016] defined robots as nonbiological autonomous agents (which defines them as inherently autonomous), and more specifically as a constructed system that displays both physical and mental agency but is not alive in the biological sense. Considering autonomy in the context of human-robot interactions, [Beer 2014] proposed a definition of autonomy as: the extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control. [IEC 2017] similarly defines autonomy as the capacity to monitor, generate, select and execute to perform a clinical function with no or limited operator intervention, and proposes guidelines to determine the degree of autonomy. [Huang 2019] similarly asserts that autonomy represents the ability of a system to react to changes and uncertainties on the fly. [Luckcuck 2019] defines an autonomous system as an artificially intelligent entity that makes decisions in response to input, independent of human interaction; robotic systems are physical entities that interact with the physical world; thus an autonomous robotic system could be defined as a machine that uses artificial intelligence and has a physical presence in, and interacts with, the real world. [Antsaklis 2019] proposed as a definition: if a system has the capacity to achieve a set of goals under a set of uncertainties in the system and its environment, by itself, without external intervention, then it will be called autonomous with respect to that set of goals under that set of uncertainties. [Norris 2019] uses the decision-making role to distinguish between automated, semi-autonomous and autonomous systems, where human operators control automated systems, machines (computers) control autonomous systems, and both are engaged in the control of semi-autonomous systems. As we have seen, when people refer to autonomous systems they often mean different things. The scope of automated devices considered as robots is also quite broad, ranging across service devices like vacuum cleaners, social robots, and precision surgical robots [Moustris 2011]; common behaviors across this range of robots seem unlikely. Intelligence is already difficult to define and measure in humans, let alone artificial intelligence. The concept of a robot seems intertwined with the notion of autonomy; but defining autonomy as behavior, or as artificial intelligence, does not add much clarity to the nature of autonomy itself. The decision-making-role notion of autonomy seems more consistent with dictionary definitions of autonomy based around concepts of “free will”.
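[Beer 2014]'s sense-plan-act formulation can be read as a control loop. The minimal sketch below (our own illustration; `sense`, `plan`, `act` and `done` are hypothetical placeholders, not functions from [Beer 2014]) also shows why the definition describes observable behavior rather than a measurable degree of autonomy: nothing in the loop itself quantifies how much external control is present.

```python
def autonomy_loop(environment, goal, sense, plan, act, done):
    """Illustrative sense-plan-act loop in the spirit of [Beer 2014].

    sense(environment) -> percept
    plan(percept, goal) -> action
    act(environment, action) -> new environment state
    done(environment, goal) -> True once the task-specific goal is reached
    """
    while not done(environment, goal):
        percept = sense(environment)            # sense the environment
        action = plan(percept, goal)            # plan based on that environment
        environment = act(environment, action)  # act upon that environment
    return environment
```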
There are a wide variety of types of robots intended for different applications, including some (not necessarily anthropomorphic) designs intended for more social applications; developing a consistent scale of autonomy across these applications seems difficult. Human social interactions with robots also raise the distinction between objective measurements of autonomy and human perceptions of autonomy (see, e.g., [Barlas 2019]). Autonomous robots are being considered in cooperative work arrangements with humans; in such contexts the activity coordination between the robot and the human could become much more complex, with task leadership passing between the two (see, e.g., [Jiang 2019]). Future controversies over the connotation of social robots are likely to concern their sociality and autonomy rather than their functionality [Sarrica 2019]. [Yang 2017] introduced a 6-level classification of autonomy in the context of medical robotics:
Level 0 – No Autonomy
Level 1 – Robot Assistance
Level 2 – Task Autonomy
Level 3 – Conditional Autonomy
Level 4 – High Autonomy
Level 5 – Full Autonomy
This six-level autonomy scale is reminiscent of the scales proposed for autonomous mobility. While at first glance it may seem attractive as an autonomy scale, the proposed categories are rather ambiguous – e.g., full autonomy to one observer may be merely a single task to another. In contrast, [Ficuciello 2019] separated out a notion of Meaningful Human Control (MHC) in the context of surgical robotics, and proposed a four-level classification:
Level 0 MHC – surgeons govern the entire surgical procedure in master-slave control mode, from data analysis and planning to decision-making and actual execution.
Level 1 MHC – humans must have the option to override robotic corrections to their actions, by enacting a second-level human control overriding first-level robotic corrections.
Level 2 MHC – humans select a task that surgical robots perform autonomously.
Level 3 MHC – humans select from among different strategies, or approve an autonomously selected strategy.
[Beer 2014]’s definition describes behaviors that autonomous systems may engage in, but does not provide a scale or measurement approach for the degree of autonomy in a particular system. The taxonomies of [Yang 2017], [IEC 2017] and [Ficuciello 2019] are defined externally to the autonomous robot system (e.g., in terms of the level of operator oversight). [Ficuciello 2019]’s insight could equally be applied to autonomous mobile devices, where a number of the proposed taxonomies could be interpreted as scales of human control rather than device autonomy. [Beer 2014]’s behavior-based definition has the advantage that behavior is observable; but observation of behavior does not always imply the existence of a rational autonomous decision causing that behavior. [Luckcuck 2019], [Antsaklis 2019] and [Norris 2019] define autonomy in terms of artificial intelligence, goal seeking and decision making. While goals and decisions can be explained where they exist, many recent technology trends have emphasized artificial intelligence techniques, such as machine learning, that are not easily amenable to providing explanations. Articulating goals and decisions across a broad range of robot application domains seems rather difficult.
It is important to be more precise and agree upon a common definition of autonomy. Could [Myhre 2019]'s definition of autonomy be applicable to the broader category of robots? Recall that this definition of autonomy requires acceptance of liability, and ideally a quantification of that liability in monetary terms. Mobile robots could incur many of the same types of liabilities as other autonomous mobile devices. Non-mobile robots cannot cause collisions with people or their property, since this category of autonomous robot devices does not move; but immobility does not prevent other causes of liability. Consider an immobile robot intended for social interactions with humans, speaking information that other people could hear; this might result in liability for privacy violations, slander, etc. Quantifying these liabilities for interactions between humans is already difficult, but not impossible; hence it is reasonable to expect that autonomous robots could be held to similar liability quantification standards. Across a broad range of application domains, robots could cause injuries of various sorts to humans and their property, resulting in potential liability. If a robot interfaces with the real world, it is difficult to envision a scenario where all potential liabilities are impossible; even a passively sensing robot could potentially incur some liability for privacy violation. Hence the approach of defining and scaling autonomy in terms of the range of acceptable accountability or liability seems applicable to a broad range of robots.
References
[Beer 2014] J. Beer, et al., “Toward a framework for levels of robot autonomy in human-robot interaction.” Journal of Human-Robot Interaction 3.2 (2014): 74-99.
[Ficuciello 2019] F. Ficuciello, et al. “Autonomy in surgical robots and its meaningful human control.” Paladyn, Journal of Behavioral Robotics 10.1 (2019): 30-43.
[Huang 2019] S. Huang, et. al., “Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots.” Industrial Robotics-New Paradigms. IntechOpen, 2019.
[IEC 2017] IEC 60601-4-1:2017, Medical electrical equipment — Part 4-1: Guidance and interpretation — Medical electrical equipment and medical electrical systems employing a degree of autonomy (2017).
[Jiang 2019] S. Jiang, “A Study of Initiative Decision-Making in Distributed Human-Robot Teams.” 2019 Third IEEE International Conference on Robotic Computing (IRC). IEEE, 2019.
[Moustris 2011] G. Moustris, et al. “Evolution of autonomous and semi‐autonomous robotic surgical systems: a review of the literature.” The international journal of medical robotics and computer assisted surgery 7.4 (2011): 375-392.
[Richards 2016] N. Richards, & W. Smart., “How should the law think about robots?.” Robot law. Edward Elgar Publishing, 2016.
[Sarrica 2019] M. Sarrica, et al., “How many facets does a “social robot” have? A review of scientific and popular definitions online.” Information Technology & People (2019).
[Yang 2017] G. Yang, et al. “Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy.” Sci. Robot 2.4 (2017): 8638.
Autonomy is relevant in many different activities, and has recently received a lot of attention in the context of autonomous mobility of cyber-physical devices. A broad range of mobile physical devices are being developed for a variety of different geographic environments (land, sea, air). These devices have various internal control capabilities, often described with a ‘Levels of Autonomy’ taxonomy. Autonomous mobility is largely concerned with the problems of navigation and the avoidance of various hazards. In many cases, these autonomous mobile devices are proposed as potential replacements for human-operated vehicles, which creates some uncertainty regarding potential liability arrangements when devices have no human operator. The levels of autonomy proposed for autonomous mobility might provide some insight for levels of autonomy in other contexts, e.g., DAOs.
Levels of Land Mobile Device Autonomy
Automobile-specific regulations and more general tort liabilities have established a body of law in which the driver is in control of the vehicle, but this is challenged by automation, culminating in autonomous vehicles [Gasser 2013]. In their review of autonomous vehicles, [Taeihagh 2019] categorized technological risks and reviewed potential regulatory and legislative responses in the US and Europe: the US has been introducing legislation to address autonomous vehicle issues related to privacy and cybersecurity; the UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues, but have yet to implement specific strategies. Autonomous vehicles are expected to displace manually controlled vehicles in various applications, becoming an increasingly significant portion of vehicular traffic on public roads over time. [SAE 2018] defines 6 levels of driving automation:
Level 0 – No Driving Automation – may provide momentary assistance (e.g., emergency intervention)
Level 1 – Driver Assistance – sustained, specific performance of a subtask
Level 2 – Partial Driving Automation – sustained, specific execution with the expectation that the driver is actively supervising the operation
Level 3 – Conditional Driving Automation – sustained, specific performance with the expectation that the user is ready to respond to requests to intervene as well as to performance failures
Level 4 – High Driving Automation – sustained, specific performance without the expectation that the user will respond to a request to intervene
Level 5 – Full Driving Automation – sustained and unconditional performance without any expectation of a user responding to a request to intervene
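Read as data, this scale is essentially a mapping from level to the human role expected during sustained operation. The minimal Python sketch below is our own illustrative paraphrase, not the normative [SAE 2018] text:

```python
# Illustrative mapping of the SAE driving-automation levels to the
# expected human role; wording is paraphrased, not the standard's text.
SAE_HUMAN_ROLE = {
    0: "driver performs the entire driving task",
    1: "driver supervises; system assists with one subtask",
    2: "driver actively supervises sustained automation",
    3: "user must be ready to respond to requests to intervene",
    4: "no user response to intervention requests expected (limited domain)",
    5: "no user response to intervention requests expected (unconditional)",
}

def human_fallback_expected(level: int) -> bool:
    """True when the human is still part of the fallback chain (levels 0-3)."""
    return level <= 3
```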
Autonomous vehicles in off-road applications face another layer of complexity from route planning, steeper slopes, hazard recognition, etc. [Elhannouny 2019]. The levels of autonomy for land mobile autonomous systems in this taxonomy are sequenced as decreasing levels of human supervision.
Levels of Sea Mobile Device Autonomy
Maritime autonomous systems pose many additional challenges to their designers, as the environment is more hostile and the device may be more remote than land mobile autonomous systems. Without well-defined safety standards and efficient risk allocation, it is hard to assign liability and define insurance or compensation schemes, leaving unmanned surface ships potentially subject to a variety of different liability regimes [Ferriera 2018]. Unmanned maritime systems have been in operation for more than 20 years [Lantz 2016], and more frequent interactions between manned and unmanned systems are expected in the future. The International Maritime Organization [IMO 2019] recently initiated a regulatory scoping review of the conventions applicable to Maritime Autonomous Surface Ships. The Norwegian Forum for Autonomous Ships [NFAS 2017] proposed a four-level classification of autonomy:
Decision Support – crew continuously in command of the vessel
Automatic – pre-programmed sequence requesting human intervention when unexpected events occur or when the sequence completes
Constrained Autonomous – fully automatic in most situations, with human operators continuously available to intervene when requested by the system
Fully Autonomous – no human crew or remote operators
As with land-based mobility autonomy, the levels of this taxonomy for sea mobile autonomous systems are defined and sequenced as decreasing levels of actions by human operators.
Levels of Air Mobile Device Autonomy
The air environment adds an additional degree of freedom (vertical) for navigation, and introduces different navigation hazards (e.g., birds, weather) compared to land or sea environments. Human-operated air mobile devices have in the past been relatively large, expensive and heavily regulated. Smaller, lower-cost drones have enabled more consumer applications. In the US, the FAA has jurisdiction over drones for non-recreational use [FAA 2016], though an increasing number of states [NACDL 2019] are also promulgating regulations in this area. Potential liabilities include not just damages to the device, but also liabilities from collisions with fixed infrastructure, personal injuries, trespass, etc.
[Clough 2002] proposed a 10-level classification of autonomous control levels, beginning with:
Level 0 – Remotely piloted vehicle
Level 1 – Execute preplanned mission
Level 2 – Pre-loaded alternative plans
Level 3 – Limited response to real-time faults/events
More recently, [Huang 2005] proposed autonomy levels for unmanned systems along three dimensions: mission complexity, human independence and environmental difficulty. The levels of this autonomous air mobility taxonomy are defined and sequenced as increasing complexity of the scenario and its implied computational task.
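Unlike single-ladder taxonomies, this ALFUS-style approach scores a system along three axes rather than one. A minimal sketch of that idea follows (the field names and the 0.0-1.0 normalization are our own assumptions; [Huang 2005] defines its own metrics and scales):

```python
from dataclasses import dataclass

@dataclass
class AlfusScore:
    """Illustrative three-axis autonomy score in the spirit of [Huang 2005]."""
    mission_complexity: float        # 0.0 (trivial) .. 1.0 (most complex)
    human_independence: float        # 0.0 (fully operated) .. 1.0 (independent)
    environmental_difficulty: float  # 0.0 (benign) .. 1.0 (most difficult)

    def summary(self) -> float:
        # Naive average; ALFUS itself does not prescribe a single number.
        return (self.mission_complexity + self.human_independence
                + self.environmental_difficulty) / 3.0
```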
Consolidated views of mobility autonomy levels
[Vagia 2019] reviewed levels of automation across different transportation modes and proposed a common faceted taxonomy based on facets of human-automation interface, environmental complexity, system complexity and societal acceptance. [Mostafa 2019] points out that the level of autonomy may be dynamically adjustable, given some situational awareness. In their literature review of levels of automation, [Vagia 2016] noted that some authors referred to “autonomy” and “automation” interchangeably, and proposed an 8-level taxonomy for levels of automation:
Level 1 – Manual control
Level 2 – Decision proposal
Level 3 – Human selection of decision
Level 4 – Computer selects decision with human approval
Level 5 – Computer executes selected decision and informs the human
Level 6 – Computer executes selected decision and informs the human only if asked
Level 7 – Computer executes selected decision and informs the human only if it decides to
Level 8 – Autonomous control; informs operators only if out-of-specification error conditions occur
This consolidated view of an autonomous mobility taxonomy is defined and structured in terms of the decision role taken by the computer system.
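Because the taxonomy is keyed to decision roles, a level assignment can in principle be read off from who proposes, selects, executes, and reports a decision. The sketch below is our own rough condensation of [Vagia 2016]'s eight levels (it collapses some distinctions, e.g., between levels 2 and 3) and is illustrative only:

```python
def vagia_level(proposes: bool, selects: bool, executes: bool,
                informs: str = "always") -> int:
    """Rough, illustrative condensation of [Vagia 2016]'s 8-level scale.

    informs: "always", "on_request", "at_discretion", or "on_error";
    only consulted when the computer executes the decision itself.
    """
    if not proposes:
        return 1   # manual control
    if not selects:
        return 2   # computer proposes, human decides (collapses levels 2-3)
    if not executes:
        return 4   # computer selects, human approves
    return {"always": 5, "on_request": 6,
            "at_discretion": 7, "on_error": 8}[informs]

# Example: computer proposes, selects and executes, informing only on error.
assert vagia_level(True, True, True, "on_error") == 8
```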
Taxonomies provide a means to classify instances, but the intended uses of the categorization limit the value it provides. For example, the [SAE 2018] classification may be useful in setting consumer expectations, and the developers of the other proposed taxonomies no doubt had other purposes in mind during their creation. One consideration is whether the proposed categorizations are useful in terms of the intended benefits of implementing some form of autonomy. [Araguz 2019] identified the needs for (or benefits from) autonomy in the literature as improving performance in systems with high uncertainties; improving responsiveness, flexibility and adaptability in dynamic contexts; and ultimately handling complexity (especially in large-scale systems); and, in the aerospace field, as a means to improve reliability and tolerance to failures. Defining a level-of-autonomy taxonomy based on the amount of operator control required, or on the complexity of the scenario and its computational tasks, arguably speaks to those dimensions of intended benefits.
Presenting levels of autonomy as a numbered sequence implies not just a classification taxonomy, but some quantification of an autonomy continuum, rather than a set of discrete and non-contiguous classification buckets. Most of these taxonomy proposals also leave full autonomy as a speculative category awaiting future technology advancements (e.g., in artificial intelligence), and do not provide existence proofs of systems that meet the category definition. While to some, autonomy may imply some degree of implementation complexity, the converse is not always true: increasing levels of complexity do not necessarily lead to autonomy. Constructing an autonomy taxonomy based on the decision roles taken by the computing system is more interesting because it focuses on the actions of the system itself rather than on external factors (e.g., scenario complexity, other actors). The decision roles themselves, however, are discrete categories rather than a linear scale, and are independent of the importance of the decisions being made.
[Myhre 2019] reviewed autonomy classifications for land and maritime environments and proposed that a system be considered autonomous if it can legally accept accountability for an operation, thereby assuming the accountability previously held by either a human operator or another autonomous system; this classifies systems as autonomous or not. With non-autonomous systems, liability generally lies with the operator; with autonomous systems, liability generally shifts back to the designer. The scope of accountability or liability in the event of some loss is often described by a monetary value, and insurance contracts are often used to protect against potential liabilities up to some specified monetary level. Monetary values have the advantage of providing a simple linear scale: a system that can accept a $1M liability is arguably more autonomous than one that could only accept a $1,000 liability.
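On this view, autonomy comparisons reduce to comparisons of the monetary liability a system (or its insurer) can accept. A minimal sketch of the idea follows; it is illustrative only, as [Myhre 2019] does not prescribe any particular data structure or API:

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    max_accepted_liability_usd: float  # e.g., the cap of its insurance contract

def more_autonomous(a: System, b: System) -> System:
    """Under the accountability view, the system able to accept the
    larger liability is (arguably) the more autonomous one."""
    if a.max_accepted_liability_usd >= b.max_accepted_liability_usd:
        return a
    return b

# Example: a $1M cap vs. a $1,000 cap, as in the text above.
delivery_bot = System("delivery_bot", 1_000_000)
chat_agent = System("chat_agent", 1_000)
assert more_autonomous(delivery_bot, chat_agent) is delivery_bot
```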
References
[Araguz 2019] C. Araguz, et al. “A Design-Oriented Characterization Framework for Decentralized, Distributed, Autonomous Systems: The Nano-Satellite Swarm Case.” International Symposium on Circuits and Systems (ISCAS). IEEE, 2019.
[Clough 2002] B. Clough, “Metrics, schmetrics! How the heck do you determine a UAV’s autonomy anyway.” Air Force Research Lab, Wright-Patterson AFB, OH, 2002.
[Elhannouny 2019] E. Elhannouny, & D. Longman. Off-Road Vehicles Research Workshop: Summary Report. No. ANL-18/45. Argonne National Lab. (ANL), Argonne, IL (United States), 2019.
[FAA 2016] FAA “Operation and Certification of Small Unmanned Aircraft Systems”, 81 Fed. Reg. 42064, 42065-42066 (rule in effect 28 June 2016); 14 C.F.R. Part 107.
[Huang 2005] H. Huang, et al. “A framework for autonomy levels for unmanned systems (ALFUS).” Proceedings of AUVSI’s Unmanned Systems North America (2005): 849-863.
[IMO 2019] IMO Legal Committee, 106th Session, 27-29 March 2019.
[SAE 2018] SAE, “Surface Vehicle Recommended Practice: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (Revised June 2018).
[Taeihagh 2019] A. Taeihagh, & H. Lim. “Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks.” Transport reviews 39.1 (2019): 103-128.
[Vagia 2016] M. Vagia, et.al., “A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?.” Applied ergonomics 53 (2016): 190-202.
[Vagia 2019] M. Vagia, & O. Rødseth. “A taxonomy for autonomous vehicles for different transportation modes.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.
A Decentralized Autonomous Organization (DAO), sometimes also referred to as a Decentralized Autonomous Corporation (e.g., [Kypriotaki 2015]), is a type of smart contract executing as an autonomous organizational entity in the context of a blockchain. Note that the “smart” in smart contract does not require or imply the use of AI (though AI is also not prohibited); it refers to automation in the design and execution of legally enforceable contracts. DAOs were originally envisioned as a pseudo-legal organization run by an assemblage of human and robot participants [Vitalik 2013a], [Vitalik 2013b], [Vitalik 2013c]. More recently, [Wang 2019] proposed a definition of a DAO as a blockchain-powered organization that can run on its own without any central authority or management hierarchy, together with a five-layer architecture reference model. The legal status of DAOs continues to evolve. In the absence of specific enabling legislation, there have been proposals that DAOs be considered a trust administrator [Jentzsch 2015], or a form of (implied) partnership [Zetsche 2018], or be construed as falling within existing LLC enabling statutes [Bayern 2014], or as requiring new enabling legislation for cryptocorporations [Nielsen 2018]. Meanwhile, some states (e.g., Vermont [Vermont 2018]) have created enabling legislation recognizing blockchain-based LLCs.
DAOs have been proposed in a number of different applications. [Zichichi 2019] proposed a DAO for crowdfunding applications. [Mylrea 2019] proposed a blockchain entity similar to a DAO in the context of energy markets. [Miscione 2019] reviewed blockchain as an organizational technology in the context of both markets and public registries. [Calcaterra 2019] proposed a DAO for underwriting insurance. [Diallo 2018] proposed DAO applications in e-government. DAOs are built on top of blockchain smart contracts. The scope of smart contracts can include off-chain computations, and a wide variety of token types and cyberphysical devices (see e.g., [Wright 2019]). Smart contracts can be used in the execution of a broad array of contracts.
An organization needs to be able to adapt to changes in its environment. Such adaptation in a software entity requires configuration control, mechanisms to clean up old versions, etc. This is complicated in the case of blockchains, where the software code may be stored as part of the “immutable” block. A key decision point in such adaptations lies in selecting which adaptations to support; in the context of decentralized systems, this is complicated by the multiple parties interacting in a decentralized fashion. This decision process in the evolution of a DAO is generally described in the literature as the DAO's governance mechanism. Some blockchain architectures (e.g., Tezos) support governance mechanisms inherently; others require governance to be built into a higher-layer dapp, in this case a DAO. The need for such governance became widely recognized in the aftermath of an early DAO implementation: [Jentzsch 2015] implemented a DAO as smart contracts on Ethereum, and after a reentrancy exploit of a coding flaw in 2016 resulted in considerable financial loss, the flaw was resolved through a hard fork [Mehar 2019]. [Avital 2019] identified three classes of governance mechanisms (control, coordination, and realignment) and compared governance practices in centralized and decentralized organizations, arguing that interpretations of governance mechanisms in near-autonomous or stigmergic (indirectly coordinated) forms of decentralized organization require a theoretical taxonomy emphasizing process over structure. [Norta 2015] proposed a governance-as-a-service model for collaborating DAOs. Governance mechanisms are an ongoing challenge for blockchains generally, not just DAOs; but DAOs as legal entities require additional consideration of who is legally authorized to make such upgrades.
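To make the governance notion concrete, the toy Python sketch below illustrates token-weighted approval of an upgrade proposal, the kind of decision process a DAO must encode; it is our own illustration (the function name, quorum rule and thresholds are assumptions), not an actual smart contract or any particular DAO's mechanism:

```python
# Toy token-weighted governance vote; illustrative only.
# Real DAOs implement comparable logic in on-chain smart contracts.

def approve_upgrade(token_balances: dict, votes: dict,
                    quorum: float = 0.5) -> bool:
    """token_balances: holder -> token count; votes: holder -> True/False.

    The proposal passes when holders of at least `quorum` of all voting
    tokens participate, and a majority of the voted tokens approve.
    """
    total = sum(token_balances.values())
    voted = sum(token_balances[h] for h in votes)
    yes = sum(token_balances[h] for h, v in votes.items() if v)
    return voted / total >= quorum and yes > voted / 2

balances = {"alice": 60, "bob": 30, "carol": 10}
print(approve_upgrade(balances, {"alice": True, "carol": False}))  # True
```

Even this toy version surfaces the legal question raised above: the code determines *when* an upgrade is approved, but not *who* is legally authorized to enact it.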
The notion of a DAO as a new paradigm for organizational design has sparked the imagination of some commentators (e.g., [Hsieh 2018], [Woolley 2016], [Koletsi 2019]), but the underlying performance and capacity of many current blockchain protocols remain technical impediments to that vision. There have been some performance improvements; for example, [Barinov 2019] introduced a Proof of Stake DAO improving the performance of consensus mechanisms over prior Proof of Work blockchains. There are a variety of platforms [Clincy 2019] for implementing blockchains, smart contracts, and thus DAOs. Some claim [Hsieh 2018] that Bitcoin itself is a form of DAO. DAOs could be implemented in permissionless or permissioned blockchains [Wang 2019]. The most (in)famous example was implemented on Ethereum [DuPont 2017]. [Alimoglu 2017] proposed an organization design for DAOs featuring a cryptocurrency funding mechanism and a decision mechanism based on voting, and released an Ethereum/Solidity implementation under a GNU GPL v3.0 license. DigixDAO aims to be a self-organizing community on the Ethereum blockchain that actively involves its token holders in decision making and in shaping the direction of the asset tokenization business. Members of the Ethereum community have pooled funds together to distribute grants effectively through a DAO structure (MolochDAO [Soleimani 2019]). OpenLaw provides a support mechanism for the formation of various legal entity types for DAOs. DAOstack is building an open source software stack for implementing DAOs. DatabrokerDAO appears to be launching a DAO-based data service. MakerDAO supports transactions in collateralized debt positions.
While some (e.g., [Lopucki 2018]) have questioned the wisdom of aligning software entities with legal entities, DAOs seem to be in various stages of implementation and operation, with some color of law enabling legal recognition. Some (e.g., [Field 2020]) are predicting significant commercial DAO progress in 2020. While example DAOs have been in operation for more than 5 years now, consensus on the required functionality for DAOs is still emerging. While the decentralized nature of DAOs is perhaps unfamiliar to some, the notion of a virtual entity like a corporation is widely understood. The notion of autonomy in this context could use further elaboration, and perhaps even categorization or quantification. DAOs by definition include some notion of autonomy, but what is required for a smart contract to rise to the level of a DAO is not exactly clear. Autonomy implies some aspect of free will and self-determination, but it is not clear that truly independent decision making is expected in the DAO applications so far; the objectives of the DAO applications above seem to be in the nature of an entity that can be trusted to perform in some expected manner. The smart contracts underlying a DAO employ logical programming to execute their contractual objectives; that logical basis provides an explanation of behavior even when the behavior is unexpected (e.g., due to some reentrancy flaw), whereas behavior explanations are often unavailable in artificial intelligence applications (e.g., those based on machine learning). Legal recognition as an entity implies not only a capacity for independent action, but also some duties (e.g., duties to respond to other legal processes, and obligations toward others such as shareholders and employees). Other virtual entities (e.g., corporations) rely on designated humans to perform these functions; the argument for implementing a DAO is to minimize the human role in favor of automation. Corporations also provide a feature of limited liability. Many DAO applications manipulate digital assets of considerable value, raising considerations of liability if those assets were to be lost, damaged or degraded; where the DAO extends smart contracts to cyber-physical assets or other tokenized assets, the potential liabilities may become important considerations. So how much autonomy is then required for a DAO to be a DAO?
References
[Alimoglu 2017] A. Alimoğlu, & C. Özturan. “Design of a Smart Contract Based Autonomous Organization for Sustainable Software.” 13th Int’l Conf. on e-Science. IEEE, 2017.
[Calcaterra 2019] C. Calcaterra, et al., “Decentralized Underwriting.” Available at SSRN 3396542 (2019).
[Clincy 2019] V. Clincy, & H. Shahriar. “Blockchain Development Platform Comparison.” 43rd Ann. Computer Software and Applications Conference (COMPSAC). Vol. 1. IEEE, 2019.
[Diallo 2018] N. Diallo, et al. “eGov-DAO: A better government using blockchain based decentralized autonomous organization.” 2018 International Conference on eDemocracy & eGovernment (ICEDEG). IEEE, 2018.
[DuPont 2017] Q. DuPont, “Experiments in algorithmic governance: A history and ethnography of “The DAO,” a failed decentralized autonomous organization.” Bitcoin and Beyond (Open Access). Routledge, 2017. 157-177.
[Hsieh 2018] Y. Hsieh, et al., “Bitcoin and the rise of decentralized autonomous organizations.” J. of Organization Design 7.1 (2018): 14.
[Jentzsch 2015] C. Jentzsch, “Decentralized autonomous organization to manage a trust.” White Paper (2015).
[Kypriotaki 2015] K. Kypriotaki, et al., “From bitcoin to decentralized autonomous corporations.” International Conference on Enterprise Information Systems. 2015.
[Koletsi 2019] M. Koletsi, “Radical technologies: Blockchain as an organizational movement.” Homo Virtualis 2.1 (2019): 25-33.
[Lopucki 2018] L. Lopucki, “Algorithmic Entities”, Washington U. Law Rev., vol. 95, no. 4, pp. 887-953, 2018.
[Mehar 2019] M. Mehar, et al. “Understanding a revolutionary and flawed grand experiment in blockchain: the DAO attack.” J. of Cases on Inf. Tech. (JCIT) 21.1 (2019): 19-32.
[Miscione 2019] G. Miscione, “Blockchain as organizational technology.” University of Zurich, Department of Informatics IFI Colloquium, Zurich, Switzerland, 28 February 2019. University College Dublin, 2019.
[Mylrea 2019] M. Mylrea, “Distributed Autonomous Energy Organizations: Next-Generation Blockchain Applications for Energy Infrastructure.” Artificial Intelligence for the Internet of Everything. Academic Press, 2019. 217-239.
[Nielsen 2018] T. Nielsen, “Cryptocorporations: A Proposal for Legitimizing Decentralized Autonomous Organizations.” Utah Law Review, Forthcoming (2018).
[Norta 2015] A. Norta, et al., “Conflict-resolution lifecycles for governed decentralized autonomous organization collaboration.” Proc. 2nd International Conference on Electronic Governance and Open Society: Challenges in Eurasia. ACM, 2015.
[Wang 2019] S. Wang, et al. “Decentralized Autonomous Organizations: Concept, Model, and Applications.” IEEE Transactions on Computational Social Systems 6.5 (2019): 870-878.
[Woolley 2016] S. Woolley, & P. Howard. “Automation, algorithms, and politics | political communication, computational propaganda, and autonomous agents Introduction.” Int’l J. of Communication 10 (2016): 9.
[Wright 2019] S. Wright, “Privacy in IoT Blockchains: with Big Data comes Big Responsibility”, Int’l Workshop on IoT Big Data and Blockchain (IoTBB’2019), in conjunction with IEEE Big Data (2019).
[Zetsche 2018] D. Zetsche, et al., “The distributed Liability of distributed ledgers.” U. Ill. L. Rev. 2018.
[Zichichi 2019] M. Zichichi, et al. “LikeStarter: a Smart-contract based Social DAO for Crowdfunding.” arXiv preprint arXiv:1905.05560 (2019).