On the Pervasiveness of AI in the Law

There are a number of examples where Artificial Intelligence (AI) is the subject of legal pronouncements of various types. A significant legal distinction exists between treating AI software as a “thing” (e.g., as property) and treating AI software as a legal entity. Science fiction, and some product marketing literature, provides a vision of AI as an intelligent decision-making software entity. The reality is that today’s AI systems are decidedly not intelligent thinking machines in any meaningful sense. These AI systems operate largely through heuristics: by detecting patterns in data and using knowledge, rules, and information that have been specifically encoded by people. It is important, however, to emphasize how dependent such machine learning is upon the availability and integrity of data. Humans have developed philosophies for ethical human interactions over thousands of years of recorded history. Formulating the appropriate ethical considerations for use with, of, and by AI entities is a much more recent development. This motivates the need to assess the scope of AI entity recognition in the legal field, and the ethical risks this recognition may pose.

In a more widespread category of applications, AI is being used in the implementation processes of the law. The rule of law provides an unstated context for the day-to-day activities of ordinary citizens. Laws remain applicable even when not at the forefront of citizens’ attention. In modern society, everyone is governed by the law, and uses the tools of the law (e.g., contracts) to conduct their personal and business activities. The law is an accretion of hundreds of years of human experience, distilled through formalized mechanisms for the adoption or adaptation of laws subject to human supervision, explanation, and influence. Whether in public law (e.g., criminal law) or private law (e.g., contracts), the legal system operates through human agents. For a variety of reasons, the human agents of the legal system are increasingly adopting AI technologies. Beyond the extremes of speculative science fiction, information about the scope of adoption of AI technologies in the law rarely reaches mainstream audiences. Adoption of AI within the various roles and processes of the legal system proceeds on an incremental basis. Ordinary citizens are not typically engaged in such decisions, nor are they notified of the adoption or use of AI systems.

The rise of the Machine Learning (ML) flavor of AI has been fueled by a massive increase in the creation and availability of data on the Internet. Institutions, organizations and societal processes increasingly operate using computers with stored, networked data related in some way to ordinary citizens. The law is one of those domains where, except in particular niches, high-quality, machine-processable data is currently comparatively scarce, with most legal records being unstructured natural language text. Data about ordinary citizens is increasingly being captured, stored and analyzed as big data. This analysis is often performed by ML systems detecting patterns in the data and applying those detection decisions in various ways. Data about ordinary citizens is often acquired for one purpose, perhaps even with the user’s consent, or under color of law; but once captured it may be subject to other secondary uses and analyses. Ordinary citizens have limited, or no, control over the ways in which they are represented in such data. While the general public may not have much awareness of AI software capabilities, there is evidence of increasing public awareness of, and concern regarding, the large-scale data collection practices which enable AI software [1].

Consider the private applications of AI in law. For more than 20 years, companies have been able to use rules-based AI capturing legal constraints and business policies to help them comply with the law while meeting their business objectives. More recently, computable contracts (also known as smart contracts) have been developed, particularly in the context of blockchain technologies. These enable the automation of legally binding operations through software. Such “smart contracts” do not require traditional ML or rules-based, expert-system AI functions, but do provide for autonomous execution of legally binding actions. Consumer use of AI systems for legal purposes is also increasing. Most taxpayers are familiar with rules-based software for tax return preparation; such software could be classified as an expert system. There are also simple expert systems—often in the form of chatbots—that provide ordinary citizens with answers to basic legal questions [2]. AI implementations as the tools of private law in everyday use (e.g., contract automation) are becoming more widespread.

Governmental officials of various kinds are increasingly using AI systems to make substantive legal or policy decisions. Often, government agencies have programmed systems that contain a series of rules about when applicants for benefits should be approved and when they should not. These systems often contain automated computer assessments that either entirely prescribe the outcome of the decision or, at the very least, influence it. Judges are increasingly using software systems that employ AI to provide a score that attempts to quantify a defendant’s risk of reoffending. Although the judge is not bound by these automated risk assessment scores, they are often influential in the judge’s decisions. The census, tax records and a host of other reporting obligations create a flood of citizen-created data flowing to various governmental agencies. Machine-generated and machine-collected data concerning citizens (and the national environment) is also routinely being collected and processed by governmental entities. ML technology is used in predictive policing to detect patterns from past crime data in an attempt to predict the location and time of future crime attempts. Governmental databases that contain photos of those who have previously come into contact with the government or law enforcement provide a data source for facial recognition software.

The vision of AI software as an entity raises the question of whether the law should recognize such an entity as a legal person. While this is a subject of discussion in many countries, individual examples of AI systems have gained some form of legal recognition. As examples: in 2014, Vital, a robot developed by Aging Analytics UK, was appointed as a board member of the Hong Kong venture capital firm Deep Knowledge Ventures; in 2016, a Finnish company (Tieto) appointed Alicia T, an expert system, as a member of the management team with the capacity to vote; and, in 2017, “Sophia” (a social humanoid robot developed by Hong Kong-based company Hanson Robotics, in collaboration with Google’s parent company Alphabet and SingularityNET) reportedly received citizenship in Saudi Arabia and was named the first Innovation Champion of the United Nations Development Programme [3]. Non-human legal entities (e.g., corporations) have previously been recognized by the law in most jurisdictions, but these are typically governed by human boards of directors. While humans have had experience with corporations for over a hundred years, legally recognizable AI entities are a much newer concept, and the norms of ethical behavior for interaction with such entities have yet to be established.

The exponential growth in data over the past decade has impacted the legal industry: both requiring automated solutions for the cost-effective and efficient management of the volume and variety of big (legal) data, and enabling AI techniques based on machine learning for the analysis of that data. Legal innovations enabling the recognition of software as legal entities (e.g., BBLLCs, DAOs) are starting to emerge. The author William Gibson [4] noted that the future is already here; it is just not evenly distributed. Deployments of AI systems in both public and private law applications are proceeding, niche by niche, as the economics warrant and the effectiveness is demonstrated. Robo-advisors, intelligent or otherwise, and software entities as counterparties have already emerged in financial applications. Scaling such AI systems from isolated niches to integrated solutions may be an entrepreneurially attractive value proposition. Typical diffusion curves for technology initially scale rapidly and then slow down. Because the legal system affects everyone, rapidly scaling the pervasiveness of AI in the law seems a disquieting prospect. AI systems seem to thrive on data about us humans. How much visibility do citizens have into the pervasiveness of AI system deployments?

An extended treatment of this topic is available in a paper presented at the IEEE 4th International Workshop on Applications of Artificial Intelligence in the Legal Industry (part of the IEEE Big Data Conference 2020).

References

[1] Wright, S. A. (2019, December). Privacy in IoT blockchains: with big data comes big responsibility. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 5282-5291). IEEE.


[2] Morgan, J., Paiement, A., Seisenberger, M., Williams, J., & Wyner, A. (2018, December). A Chatbot Framework for the Children’s Legal Centre. In The 31st international conference on Legal Knowledge and Information Systems (JURIX).


[3] Pagallo, U. (2018). Vital, Sophia, and Co.—The quest for the legal personhood of robots. Information, 9(9), 230.

[4] Gibson, W. (1993). https://en.wikiquote.org/wiki/William_Gibson

Presenting my paper on:  “ClickThru Terms of Service: Blockchain smart contracts to improve consumer engagement?”

Date: November 12, 2020
Time: 11:30PM EST
Appearance: International Symposium on Technology and Society
Outlet: IEEE Conference
Location: Phoenix (virtual)
Format: Other

Did the expected NFV OPEX savings materialize?

When the operators issued their whitepaper [1] challenging the industry to address network function virtualization in 2012, the expected benefits included a number of improvements in operating expenses. These included improved operational efficiency, taking advantage of the higher uniformity of the infrastructure via:

  • IT orchestration mechanisms providing automated installation, scaling up and scaling out of capacity, and re-use of Virtual Machine (VM) builds.
  • A more uniform staff skill base: the skills base across the industry for operating standard high-volume IT servers is much larger and less fragmented than for today’s telecom-specific network equipment.
  • A reduction in the variety of equipment for planning and provisioning, assuming tools are developed for automation and to deal with the increased software complexity of virtualisation.
  • Mitigating failures by automated re-configuration and moving network workloads onto spare capacity using IT orchestration mechanisms, hence reducing the cost of 24/7 operations.
  • More synergy between IT and network operations: shared cloud infrastructure leading to shared operations.
  • Support for in-service software upgrade (ISSU) with easy reversion, by installing the new version of a Virtualised Network Appliance (VNA) as a new Virtual Machine (VM).

While there have been a number of studies addressing the potential for capex improvements (see e.g., (Naudts et al. 2012), (Kar et al. 2020)), there are relatively fewer studies in the literature concerning opex improvements (see e.g., (Hernandez-Valencia et al. 2015), (Bouras et al. 2016), (Karakus & Durresi 2019)). At least partly, this reflects the commercial sensitivity of expense data at network operators. Headcount is a significant cost factor in operations. Opex improvements could imply headcount reductions, which would also make the topic sensitive for network operator staff.

The transformative nature of NFV, transitioning equipment spend from custom hardware to software on generic computing infrastructure, generated significant interest and rhetoric at the time (see e.g., (Li & Chen 2015)), but other new technology introductions have also claimed significant opex improvements (e.g., GMPLS (Pasqualini et al. 2005)). Telecommunications operators are large-scale businesses, so opex reductions are an ongoing area of focus. The telecom industry is characterized by significant capital investments in infrastructure leading to significant debt loads. Average industry debt ratios have been in the range 0.69 to 0.81 over the past few years (readyratios.com), implying operating expenses would include a significant component for depreciation and amortization. Examining the annual reports of tier 1 carriers shows depreciation and amortization in the range of 15-20% of operating expenses. Telecom services are mass-market services, implying significant sales expenses to reach the mass market. Examining the annual reports of tier 1 carriers shows sales, general and administrative (SG&A) costs on the order of 25% of operating expenses. The operations efficiency improvements expected for NFV do not impact depreciation or SG&A expenses, hence at most they can impact the remaining 55-60% of the company’s total operating expenses.
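
As a rough worked bound using the figures above, the share of reportable operating expenses that NFV operations efficiencies could even address is:

$$\text{addressable opex share} \le 100\% - \underbrace{(15\text{-}20\%)}_{\text{D\&A}} - \underbrace{\approx 25\%}_{\text{SG\&A}} \approx 55\text{-}60\%$$

Any headline opex-savings percentage from an NFV study therefore needs to be scaled against this addressable share before comparing it to a carrier’s reported operating expenses.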

(Bouras et al. 2016) projected an opex reduction of 63% compared to their baseline, but it is not clear how that would relate to reportable operating expenses for the company. (Hernandez-Valencia et al. 2015) also provided numerical percentage ranges for expected savings in a number of areas, but the relation to reportable operating expenses for the company is similarly unclear. Other studies ((Karakus & Durresi 2019), (Pasqualini et al. 2005)) identified factors affecting operating expenses, but did not have consistent terminology or scope in the operating expense factors identified. Environmental operations costs of power and real estate were identified by (Hernandez-Valencia et al. 2015) and (Pasqualini et al. 2005), but (Karakus & Durresi 2019) refer only to energy-related costs. (Hernandez-Valencia et al. 2015) identified service operations costs of assurance and onboarding; (Karakus & Durresi 2019) identified service provisioning; and (Pasqualini et al. 2005) referred to service management processes: SLA negotiations, service provisioning, service cessation, and service moves/changes.

The lack of consistent operating cost models may be explained by variation across operators. Service definitions and designs may differ across operators. Environmental operations expenses like real estate and power could be affected significantly by operators’ preferences for private vs public cloud infrastructures. The design of operations reflects a company’s strategic choices on what to capitalize as fixed infrastructure and may be influenced by other factors (e.g., tax policies, regulatory regimes). Numerical targets for opex reductions thus seem difficult to generalize across organizations. Even within a single organization, tracking such targets at the corporate level may be significantly affected by other corporate activities (e.g., M&A) that change reportable metrics. A better approach may be to focus on improvements in particular operational tasks that can be generalized across multiple operators and architectures.

References

Naudts, B., Kind, M., Westphal, F. J., Verbrugge, S., Colle, D., & Pickavet, M. (2012, October). Techno-economic analysis of software defined networking as architecture for the virtualization of a mobile network. In 2012 European workshop on software defined networking (pp. 67-72). IEEE.

Kar, B., Wu, E. H. K., & Lin, Y. D. (2020). Communication and Computing Cost Optimization of Meshed Hierarchical NFV Datacenters. IEEE Access, 8, 94795-94809.

Hernandez-Valencia, E., Izzo, S., & Polonsky, B. (2015). How will NFV/SDN transform service provider opex? IEEE Network, 29(3), 60-67.

Bouras, C., Ntarzanos, P., & Papazois, A. (2016, October). Cost modeling for SDN/NFV based mobile 5G networks. In 2016 8th international congress on ultra-modern telecommunications and control systems and workshops (ICUMT) (pp. 56-61). IEEE.

Karakus, M., & Durresi, A. (2019). An economic framework for analysis of network architectures: SDN and MPLS cases. Journal of Network and Computer Applications, 136, 132-146.

Li, Y., & Chen, M. (2015). Software-defined network function virtualization: A survey. IEEE Access, 3, 2542-2553.

Pasqualini, S., Kirstadter, A., Iselt, A., Chahine, R., Verbrugge, S., Colle, D., … & Demeester, P. (2005). Influence of GMPLS on network providers’ operational expenditures: a quantitative study. IEEE Communications Magazine, 43(7), 28-38.


[1] https://portal.etsi.org/NFV/NFV_White_Paper.pdf

Towards Measuring Smart Contract Automation

Blockchains are an interesting new technology emerging into large-scale commercial deployments for a variety of different applications. While cryptocurrencies were the initial application, the development of smart contracts has enabled a broader variety of transactions on blockchains. Financial transactions using blockchain smart contracts have become a significant element of broader transformations in the financial services industry. “FINTECH” refers generally to the broader transformation of financial services by technology solutions. “DeFi” refers to a more specific, though perhaps less widely supported, transformation towards Decentralized Financial services using permissionless (or public) blockchains.

Smart contracts automate transactions of cryptocurrencies and other tokenized digital assets between account holders. Smart contracts execute on the blockchain and use the blockchain to maintain transaction state information. Some smart contracts may also use oracles to interact with cyber-physical resources, off-chain computing resources or other information sources. Not every smart contract is required to have legal significance, but it is generally required for financial transactions above a certain size, so that legal recognition and enforcement of the financial transaction are available.
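
As a schematic illustration of this execution model, the Python sketch below shows a toy insurance agreement whose on-chain state is settled by an oracle report; the class, the drought clause, and the oracle interface are all invented for illustration and do not correspond to any real platform’s API.

```python
class WeatherInsuranceContract:
    """Toy smart contract: balances are the on-chain state, and an
    oracle bridges an off-chain fact (rainfall) that triggers an
    automated, irreversible transfer between the parties."""

    def __init__(self, insurer: str, insured: str, payout: int):
        self.insurer, self.insured = insurer, insured
        self.balances = {insurer: payout, insured: 0}  # on-chain state
        self.settled = False

    def oracle_report(self, rainfall_mm: float) -> None:
        # The oracle supplies off-chain information to the contract.
        if not self.settled and rainfall_mm < 10.0:  # hypothetical drought clause
            self.balances[self.insured] += self.balances[self.insurer]
            self.balances[self.insurer] = 0
        self.settled = True

contract = WeatherInsuranceContract("insurer", "insured", payout=100)
contract.oracle_report(rainfall_mm=3.2)
print(contract.balances)  # {'insurer': 0, 'insured': 100}
```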

The scope of a legal contract is a fundamental factor in any legal contractual dispute. Generally, disputes over contract scope center on whether the contract is completely contained in a single document, or whether there are additional contractual terms captured elsewhere. An analogous problem exists in the context of smart contracts as to the scope of the agreement. The academic literature has recognized a continuum of solutions between two extremes: (a) the code is the contract vs (b) the code is an implementation of a separate legal document. In practice, not all the terms and clauses of a typical legal contract are executable by a smart contract, hence intermediate solutions are desirable. Intermediate solutions include (i) the annotation of code with legal terms that are not executable by the smart contract, or (ii) the annotation of traditional contractual language to identify terms that might be computable by a smart contract. It may be easier to think of these intermediate solutions as targeting different types of users. Type (i) smart contracts might be of particular interest to software developers operating in a relatively fixed legal environment. Type (ii) smart contracts might be considered of particular relevance to lawyers and other non-software developers who are interested in focusing on the terms and clauses without being so concerned about the software implementation mechanics. Templated legal contracts have previously been used for contract automation, and this approach also applies for type (ii) smart contracts; a hypothetical example of such a templated clause is shown below.
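
As a purely hypothetical illustration of the type (ii) approach, a templated clause marks the computable terms within otherwise traditional contractual language (the field names and the {{...}} marker style here are invented for illustration, though similar marker conventions appear in templating systems):

```
Late payments shall accrue interest at {{penaltyRate}} percent per annum,
beginning {{gracePeriodDays}} days after the payment due date.
```

Everything outside the markers remains legal prose for human interpretation; only the marked fields are bound to data that a smart contract can execute upon.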

Given the dissonance in practical implementations between the scope of the legal contract and the corresponding executable smart contracts, it becomes interesting to consider how to measure the gap between these entities. Clack (2018) considered comparing the semantic differences, but legal prose and computer source code are recorded with vastly different levels of precision, making this approach difficult. The ISDA (2018) whitepaper, in considering the automation of derivatives contracts via smart contracts, distinguishes between the contract terms that should or could be automated within the overall scope of terms in derivative contracts. Templated contracts provide a mechanism to identify the specific elements of the contract which are computationally significant. Some comparison of the quantity of computable terms within a contract, relative to the size of the overall contract, may therefore provide a useful perspective on the degree of automation of the contract.

There are a number of tools and methods for capturing templated contracts. Similarly, a variety of languages exist for encoding the corresponding smart contract. Several of these are even available, to varying degrees, in open source, enabling easier access for study. The Accord Project [1], hosted by the Linux Foundation, is one such open source project providing tools and templates for smart legal contracts. In particular, this project provides [2] (as of 9/1/2020) a repository with 51 examples of contract text with corresponding data structures for the data fields that are computable within the smart contracts. The figure below provides plots of various measures of the size of the legal clause or contract (# words, # sentences, # paragraphs, # clauses) vs counts of the number of data fields that were templated, with a linear trendline for reference. These plots show an increasing trend in the number of templated terms with the size of the contract. But the number of data fields templated is rather low in comparison to the size of the contracts, with (on average) <5 per legal clause, <2.5 per paragraph, <1.5 per sentence, and overall <10% of words templated as data fields.
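
A minimal sketch of this kind of measurement, assuming template text in which computable data fields are marked with {{...}} (as in the hypothetical clause above); the regex, function name and field names are illustrative rather than any Accord Project tooling:

```python
import re

FIELD_RE = re.compile(r"\{\{[^}]+\}\}")  # matches {{fieldName}}-style markers

def automation_measures(template_text: str) -> dict:
    """Count templated data fields relative to the size of the clause."""
    fields = FIELD_RE.findall(template_text)
    prose = FIELD_RE.sub("FIELD", template_text)  # neutralize the markers
    words = prose.split()
    sentences = [s for s in re.split(r"[.!?]+", prose) if s.strip()]
    return {
        "fields": len(fields),
        "words": len(words),
        "sentences": len(sentences),
        "fields_per_sentence": len(fields) / max(len(sentences), 1),
        "field_word_share": len(fields) / max(len(words), 1),
    }

clause = ("Late payments shall accrue interest at {{penaltyRate}} percent "
          "per annum, beginning {{gracePeriodDays}} days after the due date.")
print(automation_measures(clause))  # 2 fields / 17 words: share ~ 0.12
```

Applied over a corpus of templates, a ratio like field_word_share yields the kind of <10% figure reported above.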

This corpus may be rather small; a larger corpus may provide for more statistical rigor. This corpus is also intended as exemplary: it is an aid to illustrate the operation, and feasibility, of the smart contract functionality provided by Accord. In that sense, it may not be representative of commercial smart contracts in operation on various blockchains.

If we consider the templated data fields as being the terms that should be automated, in ISDA’s parlance, having only 10% of the words in this category would seem to indicate a relatively low degree of automation at present. This may be indicative of the current state of the technology, where the easier use cases are automated first. It would be interesting to better understand the limits of the contract terms that could be automated via smart contracts. Some grammatical constructs of natural language (e.g., “the”, “of”) may have no 1:1 semantic equivalent in computation. Some typical legal clauses (e.g., choice of law, choice of venue) may require action by parties off the blockchain that is difficult to automate in a smart contract. Hence, the limit on how many words in a legal contract may have computational significance in a smart contract may be less than 100%.

References

Clack, C. D. (2018). Smart Contract Templates: legal semantics and code validation. Journal of Digital Banking, 2(4), 338-352.

ISDA (2018). White paper: Smart Derivatives Contracts: From Concept to Construction.


[1] https://accordproject.org/

[2] https://templates.accordproject.org/

Asset Administration by Smart Contracts

Buterin’s white paper [Buterin 2014] described smart contracts as “systems which automatically move digital assets according to arbitrary pre-specified rules”. [ISO 2019] defined an “asset” as anything that has value to a stakeholder, and a “digital asset” as one that exists only in digital form or which is the digital representation of another asset. Similarly, a “token” is a digital asset representing a collection of entitlements. The tokenization of assets refers to the process of issuing a blockchain token (specifically, a security token) that digitally represents a real tradable asset, in many ways similar to the traditional process of securitization [Deloitte 2018]. A security token could thus represent an ownership share in a company, real estate, or other financial asset. These security tokens can then be traded on a secondary market. A security token is also capable of having the token-holder’s rights embedded directly onto the token and immutably recorded on the blockchain. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets, though smart contracts may also have other purposes.

Recall that smart contracts started as an enhancement providing a programmable virtual machine in the context of blockchains, and the initial applications of blockchains were cryptocurrencies. Cryptocurrencies have been recognized as commodities for the purpose of regulations on derivatives like options and futures. High-value smart contracts on cryptocurrency derivatives require substantial legal protection and often utilize standardized legal documentation provided by the International Swaps and Derivatives Association (ISDA). Smart contracts managing cryptocurrency derivatives aim to automate many aspects of the provisions of the ISDA legal documentation [Clack 2019]. There have been a number of efforts to extend blockchains and smart contracts beyond cryptocurrency applications to manage other types of assets. Initially these were custom dApps, but as interest in smart contracts for specific types of assets grew, a corresponding interest developed in having common token representations for particular types of assets, enabling broader interoperability and reducing custom legal and development risks and costs. Rather than having specialized blockchains for supply chain provenance and others for smart derivatives contracts, different tokens representing those asset classes can be managed by a smart contract independently of the underlying blockchain technology.

Not all tokens are intrinsically valuable; many derive their value by reference from some underlying asset. [Bartoletti 2017] classifies smart contracts by application domain as financial, notary (leveraging the immutability of the blockchain to memorialize some data), games (of skill or chance), wallet (managing accounts, sometimes with multiple owners), library (for general purpose operations, e.g., math and string transformations) and unclassified; the financial and notary categories had the most contracts. Notary smart contracts enable smart contracts to manage non-cryptocurrency assets. [Alharby 2018] classified the smart contract literature using a keyword technique into: security, privacy, software engineering, application (e.g., IoT), performance & scalability, and other smart contract related topics. The application domains were identified as: Internet of Things (IoT), cloud computing, financial, data (e.g., data sharing, data management, data indexing, data integrity checks, data provenance), healthcare, access control & authentication, and other applications. [Rouhani 2019] categorized decentralized applications into seven main groups: healthcare, IoT, identity management, record keeping, supply chain, BPM, and voting. However, blockchain-based applications are not limited to these groups. Keywords and application identification provide a view of the breadth of applications, but these are not exclusive or finite categories; new applications or keywords can always be developed, extending those lists. Token-based music platforms have been proposed [Mouloud 2019]. Networked digital sharing economy services, enabling the effective and efficient sharing of vehicles, housing, and everyday objects, can utilize a blockchain ledger and smart contracting technologies to improve peer trust and limit the number of required intermediaries, respectively [Fedosov 2018]. The tokenization of sustainable infrastructure can address some of the fundamental challenges in the financing of the asset class, such as lack of liquidity, high transaction costs and limited transparency [Uzoski 2019]. Tokens can also be useful from a privacy perspective. Tokenization, the process of converting a piece of data into a random string of characters known as a token, can be used to protect sensitive data by substituting non-sensitive data. The token serves merely as a reference to the original data, but does not determine those original data values. The advantage of tokens, from a privacy perspective, is that there is no mathematical relationship to the real data that they represent, so the real data values cannot be obtained through reversal [Morrow 2019]. If the costs of tokenizing and marketing new asset classes are lower than the costs of traditional securities offerings, then this can enable securitization of new asset classes. The ability to subdivide some types of tokens may enable wider markets through reduced minimum bid sizes.
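
A minimal sketch of this privacy-oriented tokenization, assuming a trusted vault that holds the only mapping between tokens and original values (the class and method names are invented for illustration):

```python
import secrets

class TokenVault:
    """Substitutes sensitive values with random tokens. Only the vault
    holds the token-to-value mapping; since a token is not derived
    mathematically from the value, it cannot be reversed without it."""

    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)  # random, unrelated to the value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # e.g., a card number
print(token)                    # safe to store or share downstream
print(vault.detokenize(token))  # recoverable only via the vault
```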

In 2004, early proposals were made for XML data type definitions to capture electronic contracts [Krishna 2004]. In 2015, the Ethereum developer community adopted ERC-20 (which specifies a common interface for fungible tokens that are divisible and not distinguishable) to ensure interoperability [Vogelsteller 2015]. While the initial token applications may have been for cryptocurrencies, blockchains (especially Ethereum) are being applied in many other domains, and so the assets administered by smart contracts are being stretched beyond their original purpose to enable new applications. Studies of trading patterns would need to distinguish whether those tokens were all being used to represent the same kind of asset in order to make valid inferences about a particular market for that asset. Stretching beyond fungible cryptocurrencies to enable popular new blockchain applications, like tracking supply chain provenance, requires a different kind of token: a non-fungible token. Non-fungible tokens (NFTs) are a new type of unique and indivisible blockchain-based token introduced in late 2017 [Regner 2019]. In 2018, the Ethereum community adopted ERC-721, which extends the common interface for tokens with additional functions to ensure that tokens based on it are distinctly non-fungible and thus unique [Entriken 2018].
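
The fungible/non-fungible distinction is essentially a difference in bookkeeping, as the schematic Python sketch below suggests (this is not the Solidity interface of either standard; the class and method names are invented):

```python
class FungibleLedger:
    """ERC-20 style bookkeeping: accounts hold interchangeable,
    divisible balances of a single token type."""
    def __init__(self):
        self.balances = {}  # owner -> amount

    def transfer(self, src: str, dst: str, amount: int) -> None:
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class NonFungibleLedger:
    """ERC-721 style bookkeeping: each token id is unique, indivisible,
    and owned as a whole -- suitable for provenance-style applications."""
    def __init__(self):
        self.owner_of = {}  # token_id -> owner

    def transfer(self, src: str, dst: str, token_id: int) -> None:
        assert self.owner_of.get(token_id) == src, "not the current owner"
        self.owner_of[token_id] = dst
```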

In 2018, [FINMA 2018] identified three classes of tokens: payment tokens, asset tokens and utility tokens. A utility token is intended to provide access digitally to an application or service, by means of a blockchain-based infrastructure. This may include the ability to exchange the token for the service. Aspects of this token model include:

  • Token functions: payment, utility, asset (yield)
  • Token features: stake rewards, sole medium of exchange
  • Token distribution: initial drops and reservations for miners and service providers

In 2019, [Hong 2019] proposed a non-fungible token structure for use in Hyperledger, and [Cai 2019] proposed a universal token structure for use in token-based blockchain technology. Also in 2019, an industry group, the Token Taxonomy Initiative (TTI), proposed a Token Taxonomy Framework [TTI 2019] in an effort to model existing business models and define new ones. TTI defines a token as a representation of some shared value that is either intrinsically digital or a digital receipt or title for some material item or property, and distinguishes that from a wallet, which is a repository of tokens attributed to an owner in one or more accounts. TTI classifies tokens based on five characteristics they possess: token type (fungible or not), token unit (fractional, whole, singleton), value type (intrinsic or reference), representation type (common or unique), and template type (single or hybrid). The base token types are augmented with behaviors and properties captured in a syntax (the token formula). Particular token formulae would be suitable for different business model applications, e.g., loyalty tokens, supply chain SKU tokens, securitized real property tokens, etc. This Token Taxonomy Framework subsumes the functions, features and distribution aspects of the FINMA token model, enabling those regulatory perspectives as well as other properties of particular value in enabling different token-based business models.
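
The five TTI base characteristics lend themselves to a simple data model, sketched below in Python (the type names and the example classification are invented for illustration; the framework itself composes behaviors into a textual token formula):

```python
from dataclasses import dataclass

FUNGIBLE, NON_FUNGIBLE = "fungible", "non-fungible"

@dataclass
class TokenClassification:
    """The five TTI base characteristics; free-form behaviors stand in
    for the framework's token-formula syntax."""
    token_type: str        # "fungible" or "non-fungible"
    token_unit: str        # "fractional", "whole", or "singleton"
    value_type: str        # "intrinsic" or "reference"
    representation: str    # "common" or "unique"
    template_type: str     # "single" or "hybrid"
    behaviors: tuple = ()  # e.g., ("transferable", "divisible")

# A securitized real-property token might plausibly be classified as:
property_token = TokenClassification(
    token_type=NON_FUNGIBLE, token_unit="fractional",
    value_type="reference", representation="unique",
    template_type="single", behaviors=("transferable", "divisible"))
print(property_token)
```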

The immutable, public Ethereum blockchain enables study of the trading patterns in ERC-20 tokens, revealing trading networks that display strong power-law properties (coinciding with current network theory expectations) [Soumin 2018]. Even though the entire network of token transfers has been claimed to follow a power law in its degree distribution, many individual token networks do not: they are frequently dominated by a single hub-and-spoke pattern. When considering initial token recipients and path distances to exchanges, a large part of the activity is directed towards these central instances, but many owners never transfer their tokens at all [Victor 2019]. There is strong evidence of a positive relationship between the price of ether and the price of blockchain tokens. Token function does impact token price over a time period that spans both boom and bust. The designed connection can be effective, linking a project that has a value with a token that has a price, specifically in the absence of a legal connection or claim [Lo 2019]. From these preliminary studies, tokens seem to exhibit some of the trading properties of regular securities. Many of these initial tokens have no, or untested, legal connections to the underlying assets. While consistent behavior in boom and bust is important for an investment, from a legal perspective the predictability of outcomes for asset recovery during more stressful events (e.g., bankruptcy) may be more important.
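
The degree-distribution analysis behind such studies can be sketched simply; the snippet below assumes a pre-extracted CSV edge list of token transfers (the file name and two-column row format are assumptions):

```python
import csv
from collections import Counter

# Each row: sender, receiver -- one token transfer event.
degree = Counter()
with open("token_transfers.csv") as f:
    for sender, receiver in csv.reader(f):
        degree[sender] += 1
        degree[receiver] += 1

# Histogram of node degrees: for a power law this is roughly linear on
# a log-log plot; a hub-and-spoke token instead shows one dominant node.
histogram = Counter(degree.values())
for k in sorted(histogram):
    print(k, histogram[k])
```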

A point of concern is understanding how tokens representing value will remain linked to the real asset that they represent. For example, imagine that you own tokens representing a small fraction of a set of gold coins at a bank, and some coins are stolen. Or the reverse: who owns the gold coins if the account keys are lost or the token destroyed? Being able to rationally predict what happens to your token and to the other token owners is crucially important, since the value of tokens becomes greatly undermined if they cannot be proven to be linked to real-world assets [Deloitte 2018]. In these types of cases, off-chain enforcement action is required. A typical legal tool for representing such interests in real assets would be recording security interests and liens in real property under the UCC. One approach would be to update the lien recordings for the new owner after each transaction. There are at least two difficulties with this approach. First, the smart contract of today may not be able to interact with manual off-chain legal recordation processes for security interests. Second, if the purpose of tokenizing the asset was to increase liquidity, frequent transactions may result in a high volume of updates, overloading off-chain manual recordation processes. Another approach would be to use some centralized custody agent (similar to physical custody) and have them hold the lien in the public records as a trustee (perhaps keeping account records reflecting updates from transactions on a blockchain). If the smart contract were a legal entity (e.g., a Vermont-style BBLLC), then the BBLLC could be the entity with the security interest in the public records directly. However, the smart contract would need to be able to respond to legal actions on the lien, and may incur other obligations when acting as the custodian (e.g., reporting, insurance, licenses, etc.). The asset custodian as a traditional entity vs the BBLLC dApp provides alternatives for consideration. Traditional asset custodians provide an identifiable party from whom reparations can be sought in the event of asset loss or degradation. Asset custodians are commonly held to a fiduciary standard of care. A BBLLC approach emphasizes a digital distributed trust model; BBLLCs, however, may be challenged with off-chain enforcement actions and physical custody operations (e.g., physical asset audits). BBLLC custodians may require insurance to protect asset owners in the event of asset losses or degradation.

If ownership of an asset, such as a building, is split among thousands of people, there is little incentive for owners to bear the costs associated with that asset, such as maintenance and ensuring rent is collected [Deloitte 2018]. One can certainly imagine a smart contract detecting that rent has not been credited to an account, but what then can be done in terms of off-chain enforcement? While IoT blockchains can enable significant cyberphysical functionality, typical landlord capabilities of self-help and legal dispossessory actions would seem technically difficult or socially problematic. Some classes of contracts requiring off-chain enforcement actions may not be a good fit for complete implementation by dApp smart contracts at this stage, and may still require human physical agents or other legal entities for some actions.

Because the transaction of tokens is completed with smart contracts, certain parts of the exchange process are automated. For some classes of transactions, this automation can reduce the administrative burden involved in buying and selling, with fewer intermediaries needed, leading not only to faster deal execution, but also to lower transaction fees [Deloitte 2018]. Elimination of intermediaries sounds financially efficient; eliminating all intermediaries, however, may not be wise for some classes of assets. An intermediate entity may be useful as a liability shield for remote owners. Consider a tokenized mobile asset (e.g., a drone or terrestrial vehicle), owned and operated via a smart contract, which injures another person or their property; most remote owners would insist on some limited liability entity or insurance. While smart contract operated vehicles may not be computationally feasible in the short term, even immobile asset classes like real estate can result in liabilities for the owner (e.g., a premises slip and fall). The point is that for some set of physical asset classes, the existence of at least one intermediate entity for the purpose of liability shielding may be desirable. The actions of smart contracts on public blockchains may also raise privacy concerns.

By tokenizing financial assets—especially private securities or typically illiquid assets—these tokens can then be traded on a secondary market of the issuer’s choice, enabling greater portfolio diversification and capital investment in otherwise illiquid assets. Tokenization could open up investment in assets to a much wider audience through reduced minimum investment amounts and periods. Tokens can be highly divisible, meaning investors can purchase tokens that represent incredibly small percentages of the underlying assets. If each order is cheaper and easier to process, it will open the way for a significant reduction of minimum investment amounts [Deloitte 2018]. Token markets to date have often been via exempt ICOs that are restricted to accredited investors, minimizing regulatory filings, etc. Investment minimums are unlikely to be a major driver for accredited investors, though enabling investment in diverse, but otherwise illiquid, asset classes may be of interest for portfolio diversification. Enabling liquidity for mass market investors would require security token investments to meet the necessary higher regulatory standards for filings and disclosures to bring those investments to the larger public markets. Smart contracts offer efficient process automation for trading and other transactions based on tokenized assets. While this can provide market efficiencies, not all asset classes are ready for tokenization without further consideration. Smart contracts may also need to take on additional behaviors to reflect the increased importance of their role in administering assets. Asset administration using smart contracts is emerging as a viable mechanism for digitized or tokenized assets.

References

[Alharby 2018] M. Alharby, et al., “Blockchain-based smart contracts: A systematic mapping study of academic research (2018).” Proc. Int’l Conf. on Cloud Computing, Big Data and Blockchain. 2018.

[Bartoletti 2017] M. Bartoletti & L. Pompianu. “An empirical analysis of smart contracts: platforms, applications, and design patterns.” International conference on financial cryptography and data security. Springer, Cham, 2017.

[Buterin 2014] V. Buterin, “A next-generation smart contract and decentralized application platform.” white paper 3 (2014): 37.

[Cai 2019] T. Cai, et al. “Analysis of Blockchain System With Token-Based Bookkeeping Method.” IEEE Access 7 (2019): 50823-50832.

[Clack 2019] C. Clack & C. McGonagle. “Smart Derivatives Contracts: the ISDA Master Agreement and the automation of payments and deliveries.” arXiv preprint arXiv:1904.01461 (2019).

[Deloitte 2018] Deloitte, “The tokenization of assets is disrupting the financial industry. Are you ready?” (2018).

[Entriken 2018] W. Entriken, et al., ERC-721 Non-Fungible Token Standard (2018).

[Fedosov 2018] A. Fedosov, et al., “Sharing physical objects using smart contracts.” Proceedings of the 20th Int’l Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM, 2018.

[FINMA 2018] FINMA, Guidelines for enquiries regarding the regulatory framework for initial coin offerings (ICOs), Report, Swiss Financial Market Supervisory Authority, 2018.

[Hong 2019] S. Hong, et al., “Design of Extensible Non-Fungible Token Model in Hyperledger Fabric.” Proc. of the 3rd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. ACM, 2019.

[ISO 2019] ISO, “Blockchain and distributed ledger technologies — Terminology”, ISO/DIS 22739:2019

[Krishna 2004] P. Krishna, et al., “An EREC framework for e-contract modeling, enactment and monitoring.” Data & Knowledge Engineering 51.1 (2004): 31-58.

[Lo 2019] Y. Lo, et al., “Assets on the Blockchain: An Empirical Study of Tokenomics.” Available at SSRN 3309686 (2019).

[Migliorini 2019] S. Migliorini, et. al., “The Rise of Enforceable Business Processes from the Hashes of Blockchain-Based Smart Contracts.” Enterprise, Business-Process and Information Systems Modeling. Springer, Cham, 2019. 130-138.

[Morrow 2019] M. Morrow & M. Zarrebini. “Blockchain and the Tokenization of the Individual: Societal Implications.” Future Internet 11.10 (2019): 220.

[Mouloud 2019] K. Mouloud, “Blockchain in the Music Industry: A Study of Token Based Music Platforms.” Diss. Aalborg University, 2019.

[Soumin 2018] S. Somin, et al., “Network analysis of erc20 tokens trading on ethereum blockchain.” International Conference on Complex Systems. Springer, Cham, 2018.

[Regner 2019] F. Regner, et al., “NFTs in Practice: Non-Fungible Tokens as Core Component of a Blockchain-based Event Ticketing Application.” (2019).

[TTI 2019] Token Taxonomy Initiative, “Token Taxonomy Framework Overview”, Nov 2019.

[Uzoski 2019] D. Uzsoki, “Tokenization of Infrastructure.” (2019).

[Victor 2019] F. Victor & B. Lüders, “Measuring Ethereum-based ERC20 token networks.” International Conference on Financial Cryptography and Data Security. Springer, Cham, 2019.

[Vogelsteller 2015] F. Vogelsteller & V. Buterin, ERC-20 Token Standard (2015).

Autonomy of Software Agents

There is some overlap between the concepts of autonomy in software agents and robots, and another overlap between the autonomy of software agents and DAOs. The concept of autonomous software agents has been around for more than 20 years. In 1996, [Franklin 1996] proposed a taxonomy of autonomous agents (biological, robotic or computational) and defined software agents as a type of computational agent (distinguished from artificial life), further categorized into task-specific agents, entertainment agents and viruses. Note that this defines agents as being autonomous, but does not provide a notion of the level of autonomy. In 1999, [Heckman 1999] reviewed potential liability issues arising from autonomous agent designs. Around the same time, [Barber 1999] quantitatively defined the degree of autonomy as an agent’s relative voting weight in decision-making. Reviewing autonomous agents in 2000, [Dowling 2000] considered the delegation of any task to a software agent as raising questions in relation to its autonomy of action and decision, the degree of trust which can be vested in the outcomes it achieves, and the location of responsibility, both moral and legal, for those outcomes; but did not couple responsibility for decisions with levels of autonomy.
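
[Barber 1999]’s metric is simple enough to state directly; the sketch below is a minimal rendering of it (the function and variable names are illustrative):

```python
def degree_of_autonomy(agent_weight: float, all_weights: list[float]) -> float:
    """Barber & Martin's notion: an agent's degree of autonomy with
    respect to a decision is its relative voting weight in making it."""
    total = sum(all_weights)
    return agent_weight / total if total else 0.0

# A fully autonomous agent holds all the decision weight (1.0); a purely
# commanded agent holds none (0.0); shared decision-making lies between.
print(degree_of_autonomy(2.0, [2.0, 1.0, 1.0]))  # -> 0.5
```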

In 2003, [Braynov 2003] considered autonomy a relative concept depending on what a user (or another agent) expects from an agent, defining autonomy as a relation with four constituents: (1) the subject of autonomy: the entity (a single agent or a group of agents) which acts or makes decisions; (2) the object of autonomy: a goal or a task that the subject wants to perform or achieve, or a decision that the subject wants to make; (3) the affector of autonomy: the entity that has an impact on the subject’s decisions and actions, thereby affecting the final outcome of the subject’s behavior; the affector could be the physical environment, another agent (including the user), or a group of agents, and could either increase or decrease the autonomy of the subject; and (4) the performance measure: a measure of how successful the subject is with respect to the object of autonomy. Around that time, [Brazier 2003] was considering whether agents could close contracts, and if so, how liabilities might be allocated. While autonomy was seen as a complex relationship involving decision making, and the allocation of liabilities was seen as important, liability was not considered a dimension of autonomy. While [Braynov 2003]’s object of autonomy included decision making, it also included broader topics such as goals.

In 2017, [Pagallo 2017] provided a historical perspective on the liability issues arising from automation through to autonomous systems, recognizing that historical approaches may be insufficient for the current challenges of technology. Considering levels of autonomy in agents from an ethical perspective, [Dyrkolbotn 2017] identified five “levels” of autonomy to pinpoint where a morally salient decision belongs on the following scale:

  • Dependence (level 1 autonomy): the behavior of the system was predicted by someone with a capacity to intervene.
  • Proxy (level 2 autonomy): the behavior of the system should have been predicted by someone with a capacity to intervene.
  • Representation (level 3 autonomy): the behavior of the system could have been predicted by someone with a capacity to intervene.
  • Legal personality (level 4 autonomy): the behavior of the system cannot be explained only in terms of the system’s design and environment; its behavior could not have been predicted by anyone with a capacity to intervene.
  • Legal immunity (level -1): the behavior of the system counts as evidence of a defect; namely, the behavior could not have been predicted by the system itself, or the machine did not have a capacity to intervene.

Also around this time, [Millard 2017] was concerned with the need for clarity concerning liabilities in the IoT space. In this period the need for change in existing liability schemes was starting to be recognized, and the level definitions of [Dyrkolbotn 2017]’s autonomy scale, based on decisions, included notions of the legal consequences of decision making.

In 2019, [Janiesch 2019] surveyed the literature on autonomy in the context of IoT agents, identifying 20 category definitions for levels of autonomy, based on whether the human or the machine performs the decision making, and 9 different dimensions or types of agent autonomy: interpretation, know-how, plan, goal, reasoning, monitoring, skill, resource and condition. They also identified 12 design requirements for autonomous agents and proposed an autonomy model and language. [Lebeuf 2019] proposed a definition of software (ro)bots as an interface paradigm including command line, graphical, touch, written/spoken language or some combination of interfaces; and also proposed a faceted taxonomy for software (ro)bots based on facets of the (ro)bot environment, intrinsic characteristics and interaction dimensions. The boundaries of these different types of software entities have become blurrier again, with less consensus on the meaning of level of autonomy in the context of software entities.

Back in 2003, [Braynov 2003] had noted that it was obvious that between the lack of autonomy and complete autonomy there is a wide range of intermediate states that describe an agent’s ability to act and decide independently. A common thread around decision making as a basis for autonomy levels emerges from [Barber 1999], [Dowling 2000], [Braynov 2003] and [Dyrkolbotn 2017], but not a consensus on a particular metric. The recent recognition of the potential for changes in legal liability regimes to better reflect software agents brings to mind [Myhre 2019]’s assertion of accountability as a measure of autonomy. Whether through action or inaction, software agents may potentially cause injuries to people or their property. For purely software agents with no cyberphysical aspects, those injuries would have to be informational in nature (e.g., privacy violations, slander, etc.). While liability or accountability for autonomous decision making may not be the only useful dimension for autonomy in software agents, it does have some practical commercial value in quantifying risks, thus enabling more commercial activities based on software agents to proceed.

References

[Barber 1999] S. Barber & C. Martin, “Agent Autonomy: Specification, Measurement, and Dynamic Adjustment.” In Proceedings of the Autonomy Control Software Workshop, Agents ’99, pp. 8-15. May 1-5, 1999, Seattle, WA.

[Braynov 2003] S. Braynov & H. Hexmoor. “Quantifying relative autonomy in multiagent interaction.” Agent Autonomy. Springer, Boston, MA, 2003. 55-73.

[Brazier 2003] F. Brazier, et al. “Can agents close contracts?” (2003).

[Dowling 2000] C. Dowling, “Intelligent agents: some ethical issues and dilemmas.” Selected papers from the second Australian Institute conference on Computer ethics. Australian Computer Society, Inc., 2000.

[Dyrkolbotn 2017] S. Dyrkolbotn, et al., “Classifying the Autonomy and Morality of Artificial Agents.” CARe-MAS@PRIMA. 2017.

[Franklin 1996] S. Franklin & A. Graesser. “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents.” International Workshop on Agent Theories, Architectures, and Languages. Springer, Berlin, Heidelberg, 1996.

[Heckman 1999] C. Heckman, & J. Wobbrock. “Liability for autonomous agent design.” Autonomous agents and multi-agent systems 2.1 (1999): 87-103.

[Janiesch 2019] C. Janiesch, et al. “Specifying autonomy in the Internet of Things: the autonomy model and notation.” Information Systems and e-Business Management 17.1 (2019): 159-194.

[Lebeuf 2019] C. Lebeuf, et al. “Defining and classifying software bots: a faceted taxonomy.” Proceedings of the 1st International Workshop on Bots in Software Engineering. IEEE Press, 2019.

[Millard 2017] C. Millard, et al., “Internet of Things Ecosystems: Unpacking Legal Relationships and Liabilities.” 2017 IEEE International Conference on Cloud Engineering (IC2E). IEEE, 2017.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[Pagallo 2017] U. Pagallo, “From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability.” IJCAI. 2017.

Autonomy of Robots

There is some overlap between definitions of autonomous mobile devices and definitions of robots: some devices may meet both definitions, though non-mobile robots would not be covered by the previous definitions. Many of the notions of autonomy for autonomous mobile devices were proposed in terms of mobility tasks or challenges. Autonomy in the context of robots has a number of different interpretations. [Richards 2016] defined robots as nonbiological autonomous agents (which defines them as being inherently autonomous), and more specifically as a constructed system that displays both physical and mental agency but is not alive in the biological sense. Considering autonomy in the context of human-robot interactions, [Beer 2014] proposed a definition of autonomy as: the extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control. [IEC 2017] similarly defines autonomy as the capacity to monitor, generate, select and execute in order to perform a clinical function with no or limited operator intervention, and proposes guidelines to determine the degree of autonomy. [Huang 2019] similarly asserts that autonomy represents the ability of a system to react to changes and uncertainties on the fly. [Luckcuck 2019] defines an autonomous system as an artificially intelligent entity that makes decisions in response to input, independent of human interaction. Robotic systems are physical entities that interact with the physical world; thus, an autonomous robotic system could be defined as a machine that uses artificial intelligence and has a physical presence in, and interacts with, the real world. [Antsaklis 2019] proposed as a definition: if a system has the capacity to achieve a set of goals under a set of uncertainties in the system and its environment, by itself, without external intervention, then it will be called autonomous with respect to the set of goals under the set of uncertainties. [Norris 2019] uses the decision-making role to distinguish between automated, semi-autonomous and autonomous systems, where human operators control automated systems, machines (computers) control autonomous systems, and both are engaged in the control of semi-autonomous systems. As we have seen, when people refer to autonomous systems they often mean different things. The scope of automated devices considered as robots is also quite broad, ranging across service devices like vacuum cleaners, social robots and precision surgical robots [Moustris 2011]; common behaviors across this range of robots seem unlikely. Intelligence is already difficult to define and measure in humans, let alone artificial intelligence. The concept of a robot seems to be intertwined with the notion of autonomy; but defining autonomy as behavior, or as artificial intelligence, does not seem to add much clarity to the nature of autonomy itself. The decision-making role notion of autonomy seems more consistent with dictionary definitions of autonomy based around concepts of “free will”.
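
A schematic rendering of [Beer 2014]’s sense-plan-act formulation, as a toy one-dimensional robot (all names and the control rule are invented for illustration, not drawn from any cited system):

```python
class ToyRobot:
    """A 1-D robot that senses its position, plans a bounded step toward
    a task-specific goal, and acts -- the three capacities in Beer et
    al.'s definition, exercised without external control."""

    def __init__(self, position: float):
        self.position = position

    def sense(self) -> float:
        return self.position  # perceive the (trivial) environment

    def plan(self, state: float, goal: float) -> float:
        return max(-1.0, min(1.0, goal - state))  # bounded step toward goal

    def act(self, step: float) -> None:
        self.position += step  # change the environment

robot, goal = ToyRobot(0.0), 5.0
while abs(robot.sense() - goal) > 1e-6:
    robot.act(robot.plan(robot.sense(), goal))
print(robot.position)  # -> 5.0: goal reached with no operator in the loop
```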

There are a wide variety of types of robots intended for different applications, including some (not necessarily anthropomorphic) designs intended for more social applications; developing a consistent scale of autonomy across these applications seems difficult. Human social interactions with robots also bring the distinction between objective measurements of autonomy and human perceptions of autonomy (see e.g., [Barlas 2019]). Autonomous robots are being considered in cooperative work arrangements with humans. In such a context the activity coordination between the robot and the human could become much more complex, with task leadership passing between the two (see e.g., [Jiang 2019]). Future controversies over the connotation of social robots are likely to concern their sociality and autonomy rather than their functionality [Sarrica 2019]. [Yang 2017] introduced a six-level classification of autonomy in the context of medical robotics:

  • Level 0 – No Autonomy
  • Level 1 – Robot Assistance
  • Level 2 – Task Autonomy
  • Level 3 – Conditional Autonomy
  • Level 4 – High Autonomy 
  • Level 5 – Full Autonomy

This six-level autonomy scale seems reminiscent of the scales proposed for autonomous mobility. While at first glance it may seem attractive as an autonomy scale, the categories proposed seem rather ambiguous; e.g., full autonomy to one observer may be merely a single task for another. In contrast, [Ficuciello 2019] separated out a notion of Meaningful Human Control (MHC) in the context of surgical robotics, and proposed a four-level classification:

  • Level 0 MHC – Surgeons govern, in master-slave control mode, the entire surgical procedure, from data analysis and planning to decision-making and actual execution.
  • Level 1 MHC – Humans must have the option to override robotic corrections to their actions, by enacting a second-level human control overriding first-level robotic corrections.
  • Level 2 MHC – Humans select a task that surgical robots perform autonomously.
  • Level 3 MHC – Humans select from among different strategies, or approve an autonomously selected strategy.
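
Read as data, the MHC scale mainly records who plans, who executes, and what override the human retains at each level. A minimal Python sketch of that reading (the field names are illustrative choices of this sketch, not terminology from [Ficuciello 2019]):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MHCLevel:
        level: int
        planner: str          # who decides what is done next
        executor: str         # who carries the action out
        human_override: bool  # can the human veto or override mid-procedure?

    MHC_SCALE = [
        MHCLevel(0, planner="surgeon", executor="surgeon (master-slave mode)", human_override=True),
        MHCLevel(1, planner="surgeon", executor="surgeon, with robotic corrections", human_override=True),
        MHCLevel(2, planner="surgeon (selects task)", executor="robot", human_override=True),
        MHCLevel(3, planner="robot (surgeon approves strategy)", executor="robot", human_override=True),
    ]

Notice that in this encoding the human override column never changes; what varies is who plans and who executes, which is why the scale reads as a scale of human control rather than of robot autonomy.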

[Beer 2014]’s definition described behaviors that autonomous systems may engage in, but does not provide a scale or measurement approach for the degree of autonomy in a particular system. The taxonomies of [Yang 2017], [IEC 2017] and [Ficuciello 2019] are defined externally to the autonomous robot system (e.g., in terms of the level of operator oversight). [Ficuciello 2019]’s insight could equally be applied to autonomous mobile devices, where a number of the proposed taxonomies could be interpreted as scales of human control rather than device autonomy. [Beer 2014]’s definition based on behavior has the advantage that behavior is observable; but observation of behavior does not always imply the existence of a rational autonomous decision causing that behavior. [Luckcuck 2019], [Antsaklis 2019] and [Norris 2019] define autonomy in terms of artificial intelligence, goal seeking, and decision making. While goals and decisions can be explained if they exist, many recent technology trends have emphasized artificial intelligence techniques, such as machine learning, that are not easily amenable to providing explanations. Articulating goals and decisions across a broad range of robot application domains seems rather difficult.

It is important to be more precise and agree upon a common definition of autonomy. Could [Myhre 2019]’s definition of autonomy be applicable to the broader category of robots? Recall that this definition of autonomy requires acceptance of liability, and ideally a quantification of that liability in monetary terms. Mobile robots could incur many of the same types of liabilities as other autonomous mobile devices. Non-mobile robots cannot cause a collision with other people or their property, since this category of autonomous robot devices does not move; but immobility does not prevent other causes of liability. Consider immobile robots intended for social interactions with humans speaking information that other people could hear; this might result in liability for privacy violations, slander, etc. Quantifying these liabilities for interactions between humans is already difficult, but not impossible; hence it is reasonable to expect that autonomous robots could be held to similar liability quantification standards. Across a broad range of application domains, robots could be a cause of injuries of various sorts to humans and their property, resulting in potential liability. If a robot interfaces with the real world, it is difficult to envision a scenario in which all potential liabilities are impossible. Even a passively sensing robot could potentially incur some liability for privacy violation. Hence the approach of defining and scaling autonomy in terms of the range of acceptable accountability or liability seems applicable to a broad range of robots.

References

[Antsaklis 2019] P. Antsaklis, “Defining Autonomy and Measuring its Levels: Goals, Uncertainties, Performance and Robustness.” ISIS (2019): 001.

[Barlas 2019] Z. Barlas, “When robots tell you what to do: Sense of agency in human- and robot-guided actions.” (2019).

[Beer 2014] J. Beer, et al., “Toward a framework for levels of robot autonomy in human-robot interaction.” Journal of Human-Robot Interaction 3.2 (2014): 74-99.

[Ficuciello 2019] F. Ficuciello, et al. “Autonomy in surgical robots and its meaningful human control.” Paladyn, Journal of Behavioral Robotics 10.1 (2019): 30-43.

[Huang 2019] S. Huang, et al., “Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots.” Industrial Robotics - New Paradigms. IntechOpen, 2019.

[IEC 2017] IEC 60601-4-1:2017, Medical electrical equipment — Part 4-1: Guidance and interpretation — Medical electrical equipment and medical electrical systems employing a degree of autonomy (2017).

[Jiang 2019] S. Jiang, “A Study of Initiative Decision-Making in Distributed Human-Robot Teams.” 2019 Third IEEE International Conference on Robotic Computing (IRC). IEEE, 2019.

[Luckcuck 2019] M. Luckcuck, et al., “Formal Specification and Verification of Autonomous Robotic Systems: A Survey.” ACM Computing Surveys 52.5 (2019): 1-41.

[Moustris 2011] G. Moustris, et al. “Evolution of autonomous and semi‐autonomous robotic surgical systems: a review of the literature.” The international journal of medical robotics and computer assisted surgery 7.4 (2011): 375-392.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[Norris 2019] W. Norris, & A. Patterson. “Automation, Autonomy, and Semi-Autonomy: A Brief Definition Relative to Robotics and Machine Systems.” (2019).

[Richards 2016] N. Richards, & W. Smart, “How should the law think about robots?” Robot Law. Edward Elgar Publishing, 2016.

[Sarrica 2019] M. Sarrica, et al., “How many facets does a ‘social robot’ have? A review of scientific and popular definitions online.” Information Technology & People (2019).

[Yang 2017] G. Yang, et al., “Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy.” Science Robotics 2.4 (2017): eaam8638.

Autonomy Levels In Mobile Devices

Autonomy is relevant in many different activities and has recently received a lot of attention in the context of autonomous mobility of cyber-physical devices. A broad range of mobile physical devices are being developed for a variety of different geographic environments (land, sea, air). These devices have various capabilities of internal controls, often described with a ‘Levels of Autonomy’ taxonomy. Autonomous mobility is largely concerned with the problems of navigation and avoidance of various hazards. In many cases, these autonomous mobile devices are proposed as potential replacements for human-operated vehicles, which creates some uncertainty regarding potential liability arrangements when devices have no human operator. The levels of autonomy proposed for autonomous mobility might provide some insight for levels of autonomy in other contexts, e.g., DAOs.

Levels of Land Mobile Device Autonomy 

Automobile-specific regulations and more general tort liabilities have established a body of law in which the driver is in control of the vehicle, but this is challenged by automation culminating in autonomous vehicles [Gasser 2013]. In their more recent review of autonomous vehicles, [Taeihagh 2019] categorized technological risks and reviewed potential regulatory and legislative responses in the US and Europe: the US has been introducing legislation to address autonomous vehicle issues related to privacy and cybersecurity; the UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues but have yet to implement specific strategies. Autonomous vehicles are expected to displace manually controlled vehicles in various applications, becoming an increasingly significant portion of the vehicular traffic on public roads over time. [SAE 2018] defines six levels of driving automation (encoded as data in the sketch after this list):

  • Level 0 – No Driving Automation – may provide momentary controls (e.g., emergency intervention)
  • Level 1 – Driver Assistance – sustained, specific performance of a subtask
  • Level 2 – Partial Driving Automation – sustained, specific execution with the expectation that the driver is actively supervising the operation
  • Level 3 – Conditional Driving Automation – sustained, specific performance with the expectation that the user is ready to respond to requests to intervene as well as to performance failures
  • Level 4 – High Driving Automation – sustained, specific performance without the expectation that the user will respond to a request to intervene
  • Level 5 – Full Driving Automation – sustained and unconditional performance without any expectation of a user responding to a request to intervene
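
Read as data, each SAE level pairs sustained machine performance with two expectations placed on the human driver: active supervision, and readiness to respond to requests to intervene. A hedged Python sketch of that pairing (the encoding is illustrative, not from [SAE 2018]):

    from enum import IntEnum

    class SAELevel(IntEnum):
        NO_DRIVING_AUTOMATION = 0
        DRIVER_ASSISTANCE = 1
        PARTIAL_DRIVING_AUTOMATION = 2
        CONDITIONAL_DRIVING_AUTOMATION = 3
        HIGH_DRIVING_AUTOMATION = 4
        FULL_DRIVING_AUTOMATION = 5

    # (must actively supervise, must respond to requests to intervene)
    DRIVER_EXPECTATIONS = {
        SAELevel.NO_DRIVING_AUTOMATION:          (True,  True),
        SAELevel.DRIVER_ASSISTANCE:              (True,  True),
        SAELevel.PARTIAL_DRIVING_AUTOMATION:     (True,  True),
        SAELevel.CONDITIONAL_DRIVING_AUTOMATION: (False, True),
        SAELevel.HIGH_DRIVING_AUTOMATION:        (False, False),
        SAELevel.FULL_DRIVING_AUTOMATION:        (False, False),
    }

Encoded this way, the scale reads as a monotone decrease in human obligations rather than as a measurement of anything internal to the vehicle.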

Autonomous vehicles in off-road applications add another layer of complexity for route planning, steeper slopes, hazard recognition, etc. [Elhannouny 2019]. The levels of autonomy for land mobile autonomous systems in this taxonomy are sequenced as decreasing levels of human supervision.

Levels of Sea Mobile Device Autonomy 

Maritime autonomous systems pose many additional challenges to their designers, as the environment is more hostile and the device may be more remote than land mobile autonomous systems. Without well-defined safety standards and efficient risk allocation, it is hard to assign liability and define insurance or compensation schemes, leaving unmanned surface ships potentially subject to a variety of different liability regimes [Ferreira 2018]. Unmanned maritime systems have been in operation for more than 20 years [Lantz 2016], and more frequent interactions between manned and unmanned systems are expected in the future. The International Maritime Organization [IMO 2019] recently initiated a regulatory scoping review of the conventions applicable to Maritime Autonomous Surface Ships. The Norwegian Forum for Autonomous Ships [NFAS 2017] proposed a four-level classification of autonomy:

  • Decision Support – crew continuously in command of the vessel
  • Automatic – pre-programmed sequence requesting human intervention when unexpected events occur or when the sequence completes
  • Constrained Autonomous – fully automatic in most situations, with human operators continuously available to intervene when requested by the system
  • Fully Autonomous – no human crew or remote operators

As with land-based mobility autonomy, the levels of this taxonomy for sea mobile autonomous systems are defined and sequenced as decreasing levels of actions by human operators.

Levels of Air Mobile Device Autonomy 

The air environment adds an additional degree of freedom (vertical) for navigation, and introduces different navigation hazards (e.g., birds, weather) compared to land or sea environments. Human-operated air mobile devices have in the past been relatively large, expensive, and heavily regulated. Smaller, lower-cost drones have enabled more consumer applications. In the US, the FAA has jurisdiction over drones for non-recreational use [FAA 2016], though an increasing number of states [NACDL 2019] are also promulgating regulations in this area. Potential liabilities include not just damages to the device, but also liabilities from collisions with fixed infrastructure, personal injuries, trespass, etc.

[Clough 2002] proposed a classification of autonomous control levels ranging from 0 to 10:

  • Level 0 – Remotely piloted vehicle
  • Level 1 – Execute preplanned mission
  • Level 2 – Pre-loaded alternative plans
  • Level 3 – Limited response to real-time faults/events
  • Level 4 – Robust response to anticipated events
  • Level 5 – Fault/event adaptive vehicle
  • Level 6 – Real-time multi-vehicle coordination
  • Level 7 – Real-time multi-vehicle cooperation
  • Level 8 – Multi-vehicle mission performance optimization
  • Level 9 – Multi-vehicle tactical performance optimization
  • Level 10 – Human-like

More recently, [Huang 2005] proposed autonomy levels for unmanned systems (ALFUS) along three dimensions: mission complexity, human independence, and environmental difficulty. The levels of this autonomous air mobility taxonomy are defined and sequenced as increasing complexity of the scenario and its implied computational task.
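
This three-dimensional view suggests that a single autonomy number is really a point in a three-axis space. A minimal Python sketch of that idea (the 0-to-10 normalization and the naive averaging are illustrative simplifications, not the metrics defined in [Huang 2005]):

    from dataclasses import dataclass

    @dataclass
    class ALFUSPoint:
        mission_complexity: float        # 0 (simple) .. 10 (complex)
        environmental_difficulty: float  # 0 (benign) .. 10 (hostile)
        human_independence: float        # 0 (teleoperated) .. 10 (unattended)

        def summary_level(self) -> float:
            """A naive scalar summary; ALFUS itself defines more detailed metrics."""
            return (self.mission_complexity
                    + self.environmental_difficulty
                    + self.human_independence) / 3.0

Collapsing three axes into one number necessarily loses information, which is one reason single-sequence autonomy scales across domains are hard to compare.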

Consolidated Views of Mobility Autonomy Levels

[Vagia 2019] reviewed levels of automation across different transportation modes and proposed a common faceted taxonomy based on facets of human-automation interface, environmental complexity, system complexity, and societal acceptance. [Mostafa 2019] points out that the level of autonomy may be dynamically adjustable, given some situational awareness. In their literature review of levels of automation, [Vagia 2016] noted that some authors referred to “autonomy” and “automation” interchangeably, and proposed an 8-level taxonomy for levels of automation (restated as data in the sketch after this list):

  • Level 1 – Manual control
  • Level 2 – Decision proposal
  • Level 3 – Human selection of decision
  • Level 4 – Computer selects decision with human approval
  • Level 5 – Computer executes selected decision and informs the human
  • Level 6 – Computer executes selected decision and informs the human only if asked
  • Level 7 – Computer executes selected decision and informs the human only if it decides to
  • Level 8 – Autonomous control; informs operators only if out-of-specification error conditions occur
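
The decision-role structure of this scale can be made explicit by recording, for each level, who proposes the decision, who approves or executes it, and when the human is informed. A hedged Python sketch of that reading (the tuple fields are an illustrative paraphrase, not [Vagia 2016]'s wording):

    # level: (who proposes, who approves/executes, when the human is informed)
    VAGIA_LEVELS = {
        1: ("human",            "human",             "always (manual control)"),
        2: ("computer",         "human",             "always"),
        3: ("computer",         "human selects",     "always"),
        4: ("computer selects", "human approves",    "always"),
        5: ("computer selects", "computer executes", "after execution"),
        6: ("computer selects", "computer executes", "only if asked"),
        7: ("computer selects", "computer executes", "only if it decides to"),
        8: ("computer selects", "computer executes", "only on out-of-spec errors"),
    }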

This consolidated view of an autonomous mobility taxonomy is defined and structured in terms of the decision role taken by the computer system.

Taxonomies provide a means to classify instances, but the intended uses of the categorization limit the value it provides. For example, the [SAE 2018] classification may be useful in setting consumer expectations, and the developers of the other proposed taxonomies no doubt had other purposes in mind during their creation. One consideration is whether the proposed categorizations are useful in terms of the intended benefits from implementing some form of autonomy. [Araguz 2019] identified the needs for (or benefits from) autonomy in the literature as improving performance in systems with high uncertainties; ameliorating responsiveness, flexibility, and adaptability in dynamic contexts; and ultimately handling complexity (especially in large-scale systems); and, in the aerospace field, as a means to improve reliability and tolerance to failures. Defining a level-of-autonomy taxonomy based on the amount of operator control required, or on the complexity of the scenario and its computational tasks, arguably speaks toward those dimensions of intended benefits.

Presenting levels of autonomy as a numbered sequence implies not just a classification taxonomy, but some quantification of an autonomy continuum, rather than a set of discrete and non-contiguous classification buckets. Most of these taxonomy proposals also leave full autonomy as a speculative category awaiting future technology advancements (e.g., in Artificial Intelligence) and do not provide existence proofs of systems that meet the category definition. While to some, autonomy may imply some degree of implementation complexity, the converse is not always true: increasing levels of complexity do not necessarily lead to autonomy. Constructing an autonomy taxonomy based on the decision roles taken by the computing systems is more interesting because it focuses on the actions of the system itself rather than external factors (e.g., scenario complexity, other actors). The decision roles themselves, however, are discrete categories rather than a linear scale, and are independent of the importance of the decisions being made.

[Myhre 2019] reviewed autonomy classifications for land and maritime environments and proposed that a system is considered autonomous if it can legally accept accountability for an operation, thereby assuming the accountability that was previously held by either a human operator or another autonomous system; systems are thus classified simply as autonomous or not. With non-autonomous systems, liability generally lies with the operator; with autonomous systems, liability generally shifts back to the designer. The scope of accountability or liability in the event of some loss is often described by a monetary value. Insurance contracts are often used to protect against potential liabilities up to some specified monetary level. Monetary values have the advantage of providing a simple linear scale. A system that can accept a $1M liability is thus arguably more autonomous than one that could only accept a $1,000 liability.
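
Because monetary liability caps form a simple linear scale, this responsibility-centered view yields a direct ordering of systems. A minimal sketch under that reading (the systems and dollar figures below are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class System:
        name: str
        max_accepted_liability_usd: int  # 0: liability remains with operator/designer

    def more_autonomous(a: System, b: System) -> bool:
        """[Myhre 2019]-style comparison: greater accepted liability, greater autonomy."""
        return a.max_accepted_liability_usd > b.max_accepted_liability_usd

    ferry = System("autonomous harbor ferry", 1_000_000)
    mower = System("robotic lawn mower", 1_000)
    assert more_autonomous(ferry, mower)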

References

[Araguz 2019] C. Araguz, et al. “A Design-Oriented Characterization Framework for Decentralized, Distributed, Autonomous Systems: The Nano-Satellite Swarm Case.”  International Symposium on Circuits and Systems (ISCAS). IEEE, 2019.

[Clough 2002] B. Clough, “Metrics, schmetrics! How the heck do you determine a UAV’s autonomy anyway.” Air Force Research Lab, Wright-Patterson AFB, OH, 2002.

[Elhannouny 2019] E. Elhannouny, & D. Longman, Off-Road Vehicles Research Workshop: Summary Report. No. ANL-18/45. Argonne National Lab (ANL), Argonne, IL, 2019.

[FAA 2016] FAA “Operation and Certification of Small Unmanned Aircraft Systems”, 81 Fed. Reg. 42064, 42065-42066 (rule in effect 28 June 2016); 14 C.F.R. Part 107.

[Ferreira 2018] F. Ferreira, et al., “Liability issues of unmanned surface vehicles.” OCEANS 2018 MTS/IEEE Charleston. IEEE, 2018.

[Gasser 2013] T. Gasser, et al., “Legal consequences of an increase in vehicle automation.” Bundesanstalt für Straßenwesen (2013).

[Huang 2005] H. Huang, et al., “A framework for autonomy levels for unmanned systems (ALFUS).” Proceedings of AUVSI’s Unmanned Systems North America (2005): 849-863.

[IMO 2019] IMO Legal Committee, 106th Session, 27-29 March 2019.

[Lantz 2016] J. Lantz, Letter to IMO in response to circular 3574, USCG, 16707/IMO/C 116, Jan 2016.

[Mostafa 2019] S. Mostafa, et al., “Adjustable autonomy: a systematic literature review.” Artificial Intelligence Review 51.2 (2019): 149-186.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[NACDL 2019] National Association of Criminal Defense Lawyers, “Domestic Drones Information Center,” retrieved 12/31/2019.

[NFAS 2017] Norwegian Forum for Autonomous Ships “Definitions for Autonomous Merchant Ships”  2017

[SAE 2018] SAE, “Surface Vehicle Recommended Practice: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (revised June 2018).

[Taeihagh 2019] A. Taeihagh, & H. Lim. “Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks.” Transport reviews 39.1 (2019): 103-128.

[Vagia 2016] M. Vagia, et al., “A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?” Applied Ergonomics 53 (2016): 190-202.

[Vagia 2019] M. Vagia, & O. Rødseth. “A taxonomy for autonomous vehicles for different transportation modes.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

Decentralized Autonomous Organizations

A Decentralized Autonomous Organization (DAO), sometimes also referred to as a Decentralized Autonomous Corporation (e.g., [Kypriotaki 2015]), is a type of smart contract executing as an autonomous organizational entity in the context of a blockchain. Note that the “smart” in smart contract does not require or imply the use of AI (though AI is also not prohibited); it refers to automation in the design and execution of legally enforceable contracts. DAOs were originally envisioned as a pseudo-legal organization run by an assemblage of human and robot participants [Buterin 2013a], [Buterin 2013b], [Buterin 2013c]. More recently, [Wang 2019] proposed a definition of a DAO as a blockchain-powered organization that can run on its own without any central authority or management hierarchy, along with a five-layer architecture reference model. The legal status of DAOs is continuing to evolve. In the absence of specific enabling legislation, there have been proposals that DAOs be considered a trust administrator [Jentzsch 2015], or a form of (implied) partnership [Zetsche 2018], or be construed as falling within existing LLC enabling statutes [Bayern 2014], or as requiring new enabling legislation for cryptocorporations [Nielsen 2018]. Meanwhile, some states (e.g., Vermont [Vermont 2018]) have created enabling legislation recognizing blockchain-based LLCs.
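
The organizational core of a DAO (proposals, member voting, and rule-driven execution with no officers in the loop) is small enough to sketch. Below is a deliberately simplified, hypothetical Python analogue; an actual DAO would be implemented as on-chain smart contracts (e.g., in Solidity), with the blockchain supplying the tamper resistance this toy lacks:

    class ToyDAO:
        """Illustrative only: token-weighted proposal voting with a majority rule."""

        def __init__(self, balances, threshold=0.5):
            self.balances = balances              # member -> governance token holdings
            self.total = sum(balances.values())   # total voting weight outstanding
            self.threshold = threshold            # fraction of weight needed to pass
            self.votes = {}                       # proposal id -> members voting yes

        def propose(self, pid):
            self.votes[pid] = set()

        def vote(self, pid, member):
            if member in self.balances:           # only token holders may vote
                self.votes[pid].add(member)

        def passes(self, pid):
            # Execution would trigger automatically once enough token weight
            # has voted yes; no officer or board signs off.
            weight = sum(self.balances[m] for m in self.votes[pid])
            return weight / self.total > self.threshold

    dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10})
    dao.propose("fund-grant-1")
    dao.vote("fund-grant-1", "alice")
    assert dao.passes("fund-grant-1")   # 60% of token weight voted yes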

DAOs have been proposed in a number of different applications. [Zichichi 2019] proposed a DAO for crowdfunding applications. [Mylrea 2019] proposed a blockchain entity similar to a DAO in the context of energy markets. [Miscione 2019] reviewed blockchain as an organizational technology in the context of both markets and public registries. [Calcaterra 2019] proposed a DAO for underwriting insurance. [Diallo 2018] proposed DAO applications in e-government. DAOs are built on top of blockchain smart contracts. The scope of smart contracts can include off-chain computations and a wide variety of token types and cyber-physical devices (see, e.g., [Wright 2019]). Smart contracts can be used in the execution of a broad array of contracts.

An organization needs to be able to adapt to changes in its environment. Such adaptation in a software entity requires configuration control, mechanisms to clean up old versions, etc. This is complicated in the case of blockchains, where the software code may be stored as part of the “immutable” block. A key decision point in such adaptations lies in the selection of which adaptations to support. In the context of decentralized systems, this is complicated by multiple parties interacting in a decentralized fashion. This decision process in the evolution of a DAO is generally described in the literature as the governance mechanism for the DAO. Some blockchain architectures (e.g., Tezos) support governance mechanisms inherently; others would require the governance to be built into a higher-layer dapp – in this case a DAO. The need for such governance became widely recognized in the aftermath of an early DAO implementation. [Jentzsch 2015] implemented[1] a DAO as smart contracts on Ethereum. After a reentrancy exploit of a coding flaw in 2016 resulted in considerable financial loss, the flaw was resolved through a hard fork [Mehar 2019]. [Avital 2019] identified three classes of governance mechanisms: control, coordination, and realignment, and compared governance practices in centralized and decentralized organizations, arguing that interpretations of governance mechanisms in near-autonomous or stigmergic (indirectly coordinated) forms of a decentralized organization require a theoretical taxonomy emphasizing process over structure. [Norta 2015] proposed a governance-as-a-service model for collaborating DAOs. Governance mechanisms are an ongoing challenge for blockchains, not just DAOs; but DAOs as a legal entity require additional consideration of who is legally authorized to make such upgrades.
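
That 2016 flaw was an instance of the well-documented reentrancy class of bugs: the contract made an external call (sending funds) before updating its own bookkeeping, so a malicious recipient could call back in and withdraw again. The following Python analogy of the flaw, and of the standard checks-effects-interactions fix, is purely illustrative and is not the actual DAO contract code:

    class VulnerableVault:
        """Python analogy of the flawed pattern; not the actual contract."""

        def __init__(self, balances):
            self.balances = balances                # address -> deposited funds

        def withdraw(self, account):
            amount = self.balances.get(account.address, 0)
            if amount > 0:
                account.receive(amount)             # external call happens first...
                self.balances[account.address] = 0  # ...state is updated afterwards,
                # so a malicious receive() can re-enter withdraw() and drain funds

    class SaferVault(VulnerableVault):
        def withdraw(self, account):
            amount = self.balances.get(account.address, 0)
            if amount > 0:
                self.balances[account.address] = 0  # checks-effects-interactions:
                account.receive(amount)             # zero the balance before calling out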

The notion of a DAO as a new paradigm for organizational design has sparked the imagination of some commentators (e.g., [Hsieh 2018], [Woolley 2016], [Koletsi 2019]), but the underlying performance and capacity of many current blockchain protocols remain technical impediments to that vision. There have been some performance improvements; for example, [Barinov 2019] introduced a Proof of Stake DAO improving the performance of consensus mechanisms over prior Proof of Work blockchains. There are a variety of platforms [Clincy 2019] for implementing blockchains, smart contracts, and thus DAOs. Some claim [Hsieh 2018] that Bitcoin itself is a form of DAO. DAOs could be implemented in permissionless blockchains or permissioned blockchains [Wang 2019]. The most (in)famous example was implemented on Ethereum [DuPont 2017]. [Alimoglu 2017] proposed an organization design for DAOs featuring a cryptocurrency funding mechanism and a decision mechanism based on voting, and released[2] an Ethereum/Solidity implementation under a GNU GPL v3.0 license. DigixDAO[3] aims to be a self-organizing community on the Ethereum blockchain that actively involves its token holders in decision making and shaping the direction of the asset tokenization business. Members of the Ethereum community have pooled funds together to distribute grants effectively through a DAO structure (MolochDAO[4] [Soleimani 2019]). OpenLaw[5] provides a support mechanism for formation of various legal entity types for DAOs. DAOstack[6] is building an open source software stack for implementing DAOs. DatabrokerDAO[7] appears to be launching a DAO-based data service. MakerDAO supports transactions in collateralized debt positions [MakerDAO 2020].

While some (e.g., [Lopucki 2018]) have questioned the wisdom of aligning software entities with legal entities, DAOs seem to be in various stages of implementation and operation, with some color of law enabling legal recognition. Some (e.g., [Field 2020]) are predicting significant commercial DAO progress in 2020. While example DAOs have been in operation for more than 5 years now, consensus on the required functionality for DAOs is still emerging. While the decentralized nature of DAOs is perhaps unfamiliar to some, it, and the notion of a virtual entity like a corporation, are widely understood. The notion of autonomy in this context could use some further elaboration, and perhaps even categorization or quantification. DAOs by definition include some notion of autonomy, but what is required for a smart contract to rise to the level of a DAO is not exactly clear. Autonomy implies some aspect of free will and self-determination, but it is not clear that truly independent decision making is expected in the DAO applications so far. The objectives of the DAO applications above seem to be in the nature of an entity that can be trusted to perform in some expected manner. The smart contracts underlying a DAO employ logical programming to execute their contractual objectives; that logical basis provides an explanation of behavior even when the behavior is unexpected (e.g., due to some reentrancy flaw), and such explanations are often unavailable in artificial intelligence applications (e.g., those based on machine learning). Legal recognition as an entity implies not only a capacity for independent action, but also some duties (e.g., duties to respond to legal processes, and obligations toward others such as shareholders or employees). Other virtual entities (e.g., corporations) rely on designated humans to perform these functions; the argument for implementing a DAO is to minimize the human role in favor of automation. Corporations also provide a feature of limited liability. Many of the DAO applications manipulate digital assets of considerable value, so liability becomes a consideration if those assets were to be lost, damaged, or degraded. Where the DAO extends smart contracts to cyber-physical assets or other tokenized assets, the potential liabilities may become important considerations. So how much autonomy is then required for a DAO to be a DAO?

References

[Alimoglu 2017] A. Alimoğlu, & C. Özturan. “Design of a Smart Contract Based Autonomous Organization for Sustainable Software.”  13th Int’l Conf. on e-Science. IEEE, 2017.

[Avital 2019] M. Avital, et al., “Governance of Decentralized Organizations: Lessons from Ethereum.” 11th Int’l Process Symposium: Organizing in the Digital Age. PROS 2019, Chania, Crete, Greece, 2019.

[Barinov 2019] I. Barinov, et al., “POSDAO: Proof of Stake Decentralized Autonomous Organization.” Available at SSRN 3368483 (2019).

[Bayern 2014] S. Bayern, “Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC.” Northwestern U. Law Rev. 108 (2014): 257-270.

[Buterin 2013a] V. Buterin, “Bootstrapping A Decentralized Autonomous Corporation: Part I” Bitcoin Magazine Sept 20, 2013

[Buterin 2013b] V. Buterin, “Bootstrapping An Autonomous Decentralized Corporation, Part 2: Interacting With the World”, Bitcoin Magazine, Sept. 22, 2013

[Buterin 2013c] V. Buterin, “Bootstrapping a Decentralized Autonomous Corporation, Part 3: Identity Corp”, Bitcoin Magazine, Sept 25, 2013

[Buterin 2014] V. Buterin, “DAOs, DACs, DAs and More: An Incomplete Terminology Guide”, Ethereum Blog, May 6, 2014

[Calcaterra 2019] C. Calcaterra, et al., “Decentralized Underwriting.” Available at SSRN 3396542 (2019).

[Clincy 2019] V. Clincy, & H. Shahriar. “Blockchain Development Platform Comparison.” 43rd Ann. Computer Software and Applications Conference (COMPSAC). Vol. 1. IEEE, 2019.

[Diallo 2018] N. Diallo, et al. “eGov-DAO: A better government using blockchain based decentralized autonomous organization.” 2018 International Conference on eDemocracy & eGovernment (ICEDEG). IEEE, 2018.

[DuPont 2017] Q. DuPont, “Experiments in algorithmic governance: A history and ethnography of “The DAO,” a failed decentralized autonomous organization.” Bitcoin and Beyond (Open Access). Routledge, 2017. 157-177.

[Field 2020] M. Field, “The DAO Spring is coming”, Medium, Jan 1, 2020

[Hsieh 2018] Y. Hsieh, et al., “Bitcoin and the rise of decentralized autonomous organizations.” J. of Organization Design 7.1 (2018): 14.

[Jentzsch 2015] C. Jentzsch, “Decentralized autonomous organization to manage a trust.” White Paper (2015)

[Kypriotaki 2015] K. Kypriotaki, et al., “From bitcoin to decentralized autonomous corporations.” International Conference on Enterprise Information Systems, 2015.

[Koletsi 2019] M. Koletsi, “Radical technologies: Blockchain as an organizational movement.” Homo Virtualis 2.1 (2019): 25-33.

[Lopucki 2018] L. Lopucki, “Algorithmic Entities”, Washington U. Law Rev., vol 95, no 4, pp 887-953, 2018

[MakerDAO 2020] MakerDAO Whitepaper https://makerdao.com/en/whitepaper/ retrieved 1/2/2020

[Mehar 2019] M. Mehar, et al. “Understanding a revolutionary and flawed grand experiment in blockchain: the DAO attack.” J. of Cases on Inf. Tech. (JCIT) 21.1 (2019): 19-32.

[Miscione 2019] G. Miscione, “Blockchain as organizational technology.” University of Zurich, Department of Informatics IFI Colloquium, Zurich, Switzerland, 28th February 2019. University College Dublin, 2019.

[Mylrea 2019] M. Mylrea, “Distributed Autonomous Energy Organizations: Next-Generation Blockchain Applications for Energy Infrastructure.” Artificial Intelligence for the Internet of Everything. Academic Press, 2019. 217-239.

[Nielsen 2018] T. Nielsen, “Cryptocorporations: A Proposal for Legitimizing Decentralized Autonomous Organizations.” Utah Law Review, Forthcoming (2018).

[Norta 2015] A. Norta, et al., “Conflict-resolution lifecycles for governed decentralized autonomous organization collaboration.” Proc. 2nd International Conference on Electronic Governance and Open Society: Challenges in Eurasia. ACM, 2015.

[Soleimani 2019] A. Soleimani, et al., “The Moloch DAO: Beating the Tragedy of the Commons using Decentralized Autonomous Organizations”, Whitepaper, 2019

[Vermont 2018] Vermont S.269 (Act 205) 2018 §4171-74

[Wang 2019] S. Wang, et al. “Decentralized Autonomous Organizations: Concept, Model, and Applications.” IEEE Transactions on Computational Social Systems 6.5 (2019): 870-878.

[Woolley 2016] S. Woolley, & P. Howard, “Automation, algorithms, and politics | Political communication, computational propaganda, and autonomous agents: Introduction.” Int’l J. of Communication 10 (2016): 9.

[Wright 2019] S. Wright, “Privacy in IoT Blockchains: with Big Data comes Big Responsibility”, Int’l Workshop on IoT Big Data and Blockchain (IoTBB’2019) in conjunction with IEEE Big Data (2019)

[Zetsche 2018] D. Zetsche, et al., “The Distributed Liability of Distributed Ledgers.” U. Ill. L. Rev. 2018.

[Zichichi 2019] M. Zichichi, et al. “LikeStarter: a Smart-contract based Social DAO for Crowdfunding.” arXiv preprint arXiv:1905.05560 (2019).


[1] https://github.com/slockit/DAO/

[2] https://github.com/ebloc/AutonomousSoftwareOrg

[3] https://digix.global/dgd/

[4] https://molochdao.com

[5] https://dao.openlaw.io/formation

[6] https://daostack.io

[7] https://databrokerdao.com
